This is the normal calculation method. It can be used in all merge modes.
The two models are compared using cosine similarity, centered on the set ratio, and the merge is calculated so as to eliminate the loss caused by merging. See the following for details: #33 https://github.com/recoilme/losslessmix
The original simple weight mode is the most basic method: it interpolates linearly between the two models according to the given weight alpha. When alpha is 0 the output is the first model (model A), and when alpha is 1 the output is the second model (model B). Other values of alpha give a weighted average of the two models.
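As a minimal sketch of that weighted average (an illustrative helper, not SuperMerger's actual code), each tensor of the two state dicts is interpolated independently:

```python
def weight_sum(theta_a, theta_b, alpha: float):
    """Linear interpolation per tensor: alpha = 0 returns model A, alpha = 1 returns model B."""
    return {key: (1.0 - alpha) * theta_a[key] + alpha * theta_b[key] for key in theta_a}
```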
One major advantage of the cosine methods over the original simple addition is that they take the structural similarity of the two models into account. This can give better results when the two models are similar but not identical. Another advantage of the cosine methods is that they limit how much detail is incorporated from one model into the other, which can help prevent overfitting and improve generalization.
With CosineA, the vectors of the first model (model A) are normalized before merging, so the resulting merged model favors the structure of the first model while incorporating details from the second model. This is because the direction of the first model's vectors is essentially being aligned with the direction of the corresponding vectors in the second model.
Comparing the top and bottom rows for level of detail, note that in every case, unlike the linear change of a normal merge, more blur is kept in the background than in the foreground.
With CosineB, on the other hand, the vectors of the second model (model B) are normalized before merging, so the resulting merged model favors the structure of the second model while incorporating details from the first model. This is because the direction of the second model's vectors is being aligned with the direction of the corresponding vectors in the first model.
In short, the choice between CosineA and CosineB depends on which model's structure you want the resulting merged model to prioritize. Use CosineA to prioritize the structure of the first model, and CosineB to prioritize the structure of the second model.
Also note that, compared with the change at Alpha 1, the second model serves as the "reference point" of the merge. So changing the order of the models can also affect the final result when trying to reach the desired output.
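SuperMerger's exact weighting is not reproduced here; the following is only a minimal sketch of the idea behind CosineA (CosineB swaps the roles of A and B), assuming the per-tensor cosine similarity between the two models' weights modulates the interpolation ratio so that dissimilar tensors keep more of the prioritized model:

```python
import torch
import torch.nn.functional as F

def cosine_a_merge(theta_a, theta_b, alpha: float):
    """Sketch of a CosineA-style merge that favors model A's structure.
    Assumption: where A and B are less similar, the effective ratio k drops
    toward 0 so more of model A survives."""
    merged = {}
    for key, a in theta_a.items():
        b = theta_b[key]
        sim = F.cosine_similarity(a.flatten().float(), b.flatten().float(), dim=0)
        k = alpha * (sim.item() + 1.0) / 2.0   # similarity in [-1, 1] mapped to [0, 1]
        merged[key] = (1.0 - k) * a + k * b
    return merged
```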
In its simplest form, this method can be thought of as a "super LoRA for permanent merges". Rather than adding the calculated difference between models (B) and (C) to model (A), it "trains" that difference onto model (A) as if it were fine-tuning it.
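For reference, the plain addDifference baseline that trainDifference is contrasted with simply adds the scaled difference per tensor (trainDifference applies the same difference as if fine-tuning, and its exact scaling is not shown here):

```python
def add_difference(theta_a, theta_b, theta_c, alpha: float):
    """Plain addDifference: A + alpha * (B - C), applied tensor by tensor."""
    return {key: theta_a[key] + alpha * (theta_b[key] - theta_c[key]) for key in theta_a}
```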
- The difference between normal addDifference and trainDifference
With rev animated and isometric-future
"IsometricFuture, garden, IsometricFuture"
Generated with addDifference ('rev animated')+('isometric future'-'sdv1.5')
Generated with trainDifference ('rev animated')+('isometric future'-'sdv1.5')
With rev animated and anything v3
"man smiling" Generated with 'rev animated'
Generated with addDifference ('rev animated')+('anything v3'-'sdv1.4')
Generated with trainDifference ('rev animated')+('anything v3'-'sdv1.4')
- LoRA vs trainDifference: This does not make LoRAs obsolete. LoRAs remain useful for their ease of use, plug-and-play flexibility, and so on.
However, it is often said that certain models "don't work well with LoRAs". Models such as "AnyLoRA", which LoRAs can be trained against, have even been developed for Civitai users. See below for how to take advantage of this and how it combines with trainDifference.
Using FeverDream, which is clearly far from the "compatibility" needed for anime-style LoRAs, and the Thicker Lines Anime Style LoRA Mix, which provides both a LoRA version and a version pre-merged with Anything V4.5, we make a direct comparison between applying the LoRA to FeverDream and trainDifference (FeverDream + Thicker Lines - Anything v4.5).
LoRA strength and merge strength are compared at 1 / 1.2 / 1.4 / 1.6. The extreme values make it easiest to see how the LoRA differs from trainDifference.
Extending a model with new concepts, or reinforcing existing concepts (and high-quality output), instead of merging
Sci-Fi Diffusion is a model trained on general sci-fi imagery. There is no longer any need to merge it with other models: by trainDifferencing it against SDv1.5, you can effectively train the sci-fi elements into your model, without the limitation of generating only an approximate difference the way a LoRA does.
As another example, you could merge Analog Diffusion and Timeless Diffusion, which are similar in nature, with cosine similarity, and then train in Modelshoot Style, which focuses on mid-range body shots, while taking care not to over-reinforce the negative photographic elements.
Furthermore, with broad models such as Surreality and seek.art MEGA, there is now far greater potential than before thanks to the relaxed license restrictions in V2. Of course, styling with different input/output weights still has value, but it all depends on your goals.
Also, models like RPG appear to have been developed from SDv1.5, so they can be trained into a model without the NSFW or female bias that comes from merges involving F222 and the like.
The direction and style of the trained difference matter
Training toward a realistic model is harder than training toward a stylistic one. For example, if your final goal is a stylistic model, consider building several model branches based on similar styles and, at the end, trainDifferencing the stylistic branch against the most realistic branch. In general, when the styles differ, merge in the order anime/cartoon > stylized > realistic.
trainDifference is not always the best solution
In some cases, depending on the type and scope of the difference, a cosine-similarity merge gives better results (if the difference is not from SDv1.5, first trainDifference against SDv1.5, then merge with cosine similarity, and then trainDifference against the model you are working on). Also, when the material is similar but broad in scope, the best results may come from using trainDifference in both directions and taking a weighted sum between the two, for example with waifu diffusion or Acertainty.
Gaining the benefits of trained models from anywhere
Models like Knollingcase and Bubble Toys are appealing, but their usefulness was limited by the framework they were trained on. Now you can trainDifference these models onto newer models developed by others.
Moreover, some people who created checkpoints rather than LoRAs have said they tried LoRAs first but could not get useful results; with trainDifference, their work can be applied to any model.
You need to know, and have access to, a model's pre-training origin
At present, many models contain some mix of SDv1.4. This trainDifference merge is accurate enough that, for example, trying to train "rev animated" onto "Sci-fi Diffusion" with SDv1.5 as model (C) causes problems: because the origin of "rev animated" is an unknown ratio between SDv1.4 and SDv1.5 (and a mix of individual input/output weights as well), the merge degrades the output (the "training" ends up offset or distorted). However, trainDifferencing "Sci-fi Diffusion" (with SDv1.5) onto "rev animated" is fine.
Given enough time, or with similar material, "burn-in" / "overtraining" can occur
At this point, you can "pull the model back" by merging it with SDv1.5 using cosine similarity. This lets you return the model to its foundation while keeping the qualities gained from training.
Once enough merging has been done, the "clip/understanding" can become heavy and hurt simple prompts
For example, complex prompts may still look good, but a simple prompt like "portrait of a woman, blue eyes" may over-express the concept of "blue". To avoid this when doing trainDifference merges or broad-scope work, you can manipulate the clip with the model toolkit. Load the final model into that extension and create two separate models: "clipA" imports the clip of the base model, and "clipB" imports the clip of what you trained. Then use a normal weightsum merge between these two models to find the best output/understanding, softening the clip as you extend the model. In some cases, weightsum-merging the final model with a version that uses the SDv1.5 clip gives better results than mixing between clipA and clipB.
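A minimal sketch of that clip-softening step, assuming SD1.x-style checkpoints where the text encoder sits under the "cond_stage_model." key prefix; the helper name and file paths are placeholders, not the model toolkit's actual interface:

```python
from safetensors.torch import load_file, save_file

def soften_clip(final_path, clip_a_path, clip_b_path, w, out_path):
    """Keep the UNet/VAE of the final model and weight-sum only the text-encoder
    (clip) tensors between the clipA and clipB variants (w = 0 -> clipA, w = 1 -> clipB)."""
    final = load_file(final_path)
    clip_a = load_file(clip_a_path)
    clip_b = load_file(clip_b_path)
    for key in final:
        if key.startswith("cond_stage_model."):   # SD1.x text-encoder prefix
            final[key] = (1.0 - w) * clip_a[key] + w * clip_b[key]
    save_file(final, out_path)
```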
- One of the simpler ways to take advantage of this is more natural and accurate LoRA styling on a different model.
Here we use BreakDomainAnime and the Mika Pikazo Style LoRA, which was trained on AnyLora.
"1girl, smiling, scenic background BREAK [mika-pikazo]"
Image generated with 'BreakDomainAnime'
Image generated with 'BreakDomainAnime' using 'Mika Pikazo Style LoRA' at a strength of 1
Here, instead of applying the LoRA to 'BreakDomainAnime', we use trainDifference to get better alignment. Using SuperMerger's LoRA tab, merge the 'Mika Pikazo Style LoRA' into the checkpoint listed for training, 'anyloraCheckpoint_novaeFp16' (presumably the one they trained against), and save it as 'anyloraCheckpoint_mika_pikazo'. Then generate with trainDifference ('BreakDomainAnime')+('any LoRA combination merged into AnyLora, in this case anyloraCheckpoint_mika_pikazo'-'AnyLora').
Another example using LoRAs and the technique above: trainDifferencing a background LoRA originally trained on an anime model over to a realistic model. "An eco-friendly residential building covered in vertical gardens in an urban setting"
This is an add-difference method that combines the benefits of a Median filter and a Gaussian filter. It tries to add the difference between models in a smoother way while avoiding the negative "burn-in" effect seen when many models are added this way, an effect you cannot get by simply adding the difference at a lower value.
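A minimal sketch of the idea, assuming each difference tensor is run through a median filter and then a Gaussian filter before being added; the filter size and sigma below are illustrative, not SuperMerger's actual settings:

```python
import torch
from scipy.ndimage import median_filter, gaussian_filter

def smooth_add_difference(theta_a, theta_b, theta_c, alpha: float):
    """Add (B - C) onto A after smoothing the difference to reduce burn-in."""
    merged = {}
    for key, a in theta_a.items():
        diff = (theta_b[key] - theta_c[key]).float().cpu().numpy()
        if diff.ndim > 1:
            # Median filter removes isolated spikes; Gaussian filter smooths what remains.
            diff = gaussian_filter(median_filter(diff, size=3), sigma=1)
        merged[key] = a + alpha * torch.from_numpy(diff).to(a.dtype)
    return merged
```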
- Starting point for reference
- Adding a collection of models to it, each at a value of 1
The burn-in is very obvious here
- Adding a collection of models to it, each at a value of 0.5
Especially looking at the bird, this is still not a result I would accept
Function and result of the Median filter alone
Function and result of the Gaussian filter alone
- Final result using the combination of Median and Gaussian filters. Note in particular, compared with using the Median/Gaussian filters individually, that in the top-right image the hair of the man in the top-left does not "clump". The combination gives the best overall result here.
Tip: sometimes you may want smooth Add difference rather than a normal Add even when there is no risk of burn-in. In those cases, because the individual influence with smooth Add is less than 1, you can raise Alpha up to 2. But this of course depends on the result you want.
This method is designed to extract similar or dissimilar features from two difference models built on a common base model.
In this configuration, a base model (model A) and two models derived from model A (models B and C) are used. The two relevant difference models are "model B - model A" and "model C - model A". To reduce spurious similarities, model A should ideally be the most recent common ancestor of the two derived models.
In this configuration, LoRA-B and LoRA-C are used directly as the two difference models. They are assumed to have been trained on a common base model, just like model A in the three-model configuration. If LoRA-B and LoRA-C are derived from different base models, the results may become unpredictable due to the mismatch in base models.
- alpha (α): Controls the focus of feature extraction between model (LoRA) B (α = 0) and model (LoRA) C (α = 1).
- beta (β): Controls the nature of the feature extraction: β = 0 means similar-feature extraction, β = 1 means dissimilar-feature extraction.
- gamma (γ): Adjusts how strictly feature similarity/dissimilarity is judged. A high γ (e.g., γ = 10) recognizes only very similar features as similar; conversely, a low γ (e.g., γ = 0.1) recognizes only very dissimilar features as dissimilar.
- α = 0, β = 0: Extracts the features of model B that are similar to the features of model C.
- α = 0, β = 0.5: Lying halfway between the two, this is neither similar- nor dissimilar-feature extraction, but a simple result expressed by the formulas below.
  - For full-parameter models: $\frac{\text{A} + \text{lerp}(\text{B}, \text{C}, \alpha)}{2}$
  - For LoRA networks: $\frac{\text{lerp}(\text{B}, \text{C}, \alpha)}{2}$
- α = 0, β = 1: Extracts the features of model B that are dissimilar to the features of model C.
- α = 1: Reverses the roles of model B and model C in the examples above.
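SuperMerger's exact extraction math is not reproduced here; the sketch below only illustrates the idea under stated assumptions (an element-wise cosine-style similarity between the two difference tensors, sharpened by γ, gates what counts as similar). It does reduce to the β = 0.5 formulas above.

```python
import torch
import torch.nn.functional as F

def extract_merge(a, b, c, alpha: float, beta: float, gamma: float):
    """Sketch of the extract idea (assumed math, not SuperMerger's exact code).

    Full-model configuration: pass tensors a, b, c; the differences (b - a) and
    (c - a) are formed internally. LoRA configuration: pass a = None and the two
    LoRA deltas as b and c.
    """
    db = b if a is None else b - a
    dc = c if a is None else c - a
    # Similarity in [0, 1], sharpened by gamma: a high gamma means only
    # near-identical directions still count as similar.
    sim = F.cosine_similarity(db, dc, dim=-1, eps=1e-8).unsqueeze(-1)
    sim = ((sim + 1.0) / 2.0) ** gamma
    mix = torch.lerp(db, dc, alpha)                    # focus between B and C
    extracted = torch.lerp(sim * mix, (1.0 - sim) * mix, beta)
    return extracted if a is None else a + extracted
```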
In a normal merge, the tensors are added together and normalized (halved), as shown in the figure below. With tensor, merging is instead done by swapping in parts of the tensors. This means the original models' tensors are kept intact, giving results that differ from a normal merge. The difference between tensor and tensor2 is that tensor2 splits along the second dimension, but only for tensors with large dimensions (such as [1280, 1280]); tensor splits along the first dimension.
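A minimal sketch of that swap; which model supplies which slice, and using alpha as the split fraction, are assumptions for illustration:

```python
import torch

def tensor_swap(a: torch.Tensor, b: torch.Tensor, alpha: float, use_dim1: bool = False) -> torch.Tensor:
    """tensor / tensor2 style merge: instead of averaging, replace a slice of
    model A's tensor with the corresponding slice of model B's tensor.
    tensor  -> split along dimension 0; tensor2 -> split along dimension 1
    (only meaningful for large 2-D tensors such as [1280, 1280])."""
    dim = 1 if use_dim1 and a.dim() >= 2 else 0
    split = int(a.shape[dim] * alpha)                  # rows/columns taken from model B
    merged = a.clone()
    idx = torch.arange(split, device=a.device)
    merged.index_copy_(dim, idx, b.index_select(dim, idx))
    return merged
```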
The tensor size of each element is noted below.
model.diffusion_model.time_embed.0.weight torch.Size([1280, 320])
model.diffusion_model.time_embed.0.bias torch.Size([1280])
model.diffusion_model.time_embed.2.weight torch.Size([1280, 1280])
model.diffusion_model.time_embed.2.bias torch.Size([1280])
model.diffusion_model.input_blocks.0.0.weight torch.Size([320, 4, 3, 3])
model.diffusion_model.input_blocks.0.0.bias torch.Size([320])
model.diffusion_model.input_blocks.1.0.in_layers.0.weight torch.Size([320])
model.diffusion_model.input_blocks.1.0.in_layers.0.bias torch.Size([320])
model.diffusion_model.input_blocks.1.0.in_layers.2.weight torch.Size([320, 320, 3, 3])
model.diffusion_model.input_blocks.1.0.in_layers.2.bias torch.Size([320])
model.diffusion_model.input_blocks.1.0.emb_layers.1.weight torch.Size([320, 1280])
model.diffusion_model.input_blocks.1.0.emb_layers.1.bias torch.Size([320])
model.diffusion_model.input_blocks.1.0.out_layers.0.weight torch.Size([320])
model.diffusion_model.input_blocks.1.0.out_layers.0.bias torch.Size([320])
model.diffusion_model.input_blocks.1.0.out_layers.3.weight torch.Size([320, 320, 3, 3])
model.diffusion_model.input_blocks.1.0.out_layers.3.bias torch.Size([320])
model.diffusion_model.input_blocks.1.1.norm.weight torch.Size([320])
model.diffusion_model.input_blocks.1.1.norm.bias torch.Size([320])
model.diffusion_model.input_blocks.1.1.proj_in.weight torch.Size([320, 320, 1, 1])
model.diffusion_model.input_blocks.1.1.proj_in.bias torch.Size([320])
model.diffusion_model.input_blocks.1.1.transformer_blocks.0.attn1.to_q.weight torch.Size([320, 320])
model.diffusion_model.input_blocks.1.1.transformer_blocks.0.attn1.to_k.weight torch.Size([320, 320])
model.diffusion_model.input_blocks.1.1.transformer_blocks.0.attn1.to_v.weight torch.Size([320, 320])
model.diffusion_model.input_blocks.1.1.transformer_blocks.0.attn1.to_out.0.weight torch.Size([320, 320])
model.diffusion_model.input_blocks.1.1.transformer_blocks.0.attn1.to_out.0.bias torch.Size([320])
model.diffusion_model.input_blocks.1.1.transformer_blocks.0.ff.net.0.proj.weight torch.Size([2560, 320])
model.diffusion_model.input_blocks.1.1.transformer_blocks.0.ff.net.0.proj.bias torch.Size([2560])
model.diffusion_model.input_blocks.1.1.transformer_blocks.0.ff.net.2.weight torch.Size([320, 1280])
model.diffusion_model.input_blocks.1.1.transformer_blocks.0.ff.net.2.bias torch.Size([320])
model.diffusion_model.input_blocks.1.1.transformer_blocks.0.attn2.to_q.weight torch.Size([320, 320])
model.diffusion_model.input_blocks.1.1.transformer_blocks.0.attn2.to_k.weight torch.Size([320, 768])
model.diffusion_model.input_blocks.1.1.transformer_blocks.0.attn2.to_v.weight torch.Size([320, 768])
model.diffusion_model.input_blocks.1.1.transformer_blocks.0.attn2.to_out.0.weight torch.Size([320, 320])
model.diffusion_model.input_blocks.1.1.transformer_blocks.0.attn2.to_out.0.bias torch.Size([320])
model.diffusion_model.input_blocks.1.1.transformer_blocks.0.norm1.weight torch.Size([320])
model.diffusion_model.input_blocks.1.1.transformer_blocks.0.norm1.bias torch.Size([320])
model.diffusion_model.input_blocks.1.1.transformer_blocks.0.norm2.weight torch.Size([320])
model.diffusion_model.input_blocks.1.1.transformer_blocks.0.norm2.bias torch.Size([320])
model.diffusion_model.input_blocks.1.1.transformer_blocks.0.norm3.weight torch.Size([320])
model.diffusion_model.input_blocks.1.1.transformer_blocks.0.norm3.bias torch.Size([320])
model.diffusion_model.input_blocks.1.1.proj_out.weight torch.Size([320, 320, 1, 1])
model.diffusion_model.input_blocks.1.1.proj_out.bias torch.Size([320])
model.diffusion_model.input_blocks.2.0.in_layers.0.weight torch.Size([320])
model.diffusion_model.input_blocks.2.0.in_layers.0.bias torch.Size([320])
model.diffusion_model.input_blocks.2.0.in_layers.2.weight torch.Size([320, 320, 3, 3])
model.diffusion_model.input_blocks.2.0.in_layers.2.bias torch.Size([320])
model.diffusion_model.input_blocks.2.0.emb_layers.1.weight torch.Size([320, 1280])
model.diffusion_model.input_blocks.2.0.emb_layers.1.bias torch.Size([320])
model.diffusion_model.input_blocks.2.0.out_layers.0.weight torch.Size([320])
model.diffusion_model.input_blocks.2.0.out_layers.0.bias torch.Size([320])
model.diffusion_model.input_blocks.2.0.out_layers.3.weight torch.Size([320, 320, 3, 3])
model.diffusion_model.input_blocks.2.0.out_layers.3.bias torch.Size([320])
model.diffusion_model.input_blocks.2.1.norm.weight torch.Size([320])
model.diffusion_model.input_blocks.2.1.norm.bias torch.Size([320])
model.diffusion_model.input_blocks.2.1.proj_in.weight torch.Size([320, 320, 1, 1])
model.diffusion_model.input_blocks.2.1.proj_in.bias torch.Size([320])
model.diffusion_model.input_blocks.2.1.transformer_blocks.0.attn1.to_q.weight torch.Size([320, 320])
model.diffusion_model.input_blocks.2.1.transformer_blocks.0.attn1.to_k.weight torch.Size([320, 320])
model.diffusion_model.input_blocks.2.1.transformer_blocks.0.attn1.to_v.weight torch.Size([320, 320])
model.diffusion_model.input_blocks.2.1.transformer_blocks.0.attn1.to_out.0.weight torch.Size([320, 320])
model.diffusion_model.input_blocks.2.1.transformer_blocks.0.attn1.to_out.0.bias torch.Size([320])
model.diffusion_model.input_blocks.2.1.transformer_blocks.0.ff.net.0.proj.weight torch.Size([2560, 320])
model.diffusion_model.input_blocks.2.1.transformer_blocks.0.ff.net.0.proj.bias torch.Size([2560])
model.diffusion_model.input_blocks.2.1.transformer_blocks.0.ff.net.2.weight torch.Size([320, 1280])
model.diffusion_model.input_blocks.2.1.transformer_blocks.0.ff.net.2.bias torch.Size([320])
model.diffusion_model.input_blocks.2.1.transformer_blocks.0.attn2.to_q.weight torch.Size([320, 320])
model.diffusion_model.input_blocks.2.1.transformer_blocks.0.attn2.to_k.weight torch.Size([320, 768])
model.diffusion_model.input_blocks.2.1.transformer_blocks.0.attn2.to_v.weight torch.Size([320, 768])
model.diffusion_model.input_blocks.2.1.transformer_blocks.0.attn2.to_out.0.weight torch.Size([320, 320])
model.diffusion_model.input_blocks.2.1.transformer_blocks.0.attn2.to_out.0.bias torch.Size([320])
model.diffusion_model.input_blocks.2.1.transformer_blocks.0.norm1.weight torch.Size([320])
model.diffusion_model.input_blocks.2.1.transformer_blocks.0.norm1.bias torch.Size([320])
model.diffusion_model.input_blocks.2.1.transformer_blocks.0.norm2.weight torch.Size([320])
model.diffusion_model.input_blocks.2.1.transformer_blocks.0.norm2.bias torch.Size([320])
model.diffusion_model.input_blocks.2.1.transformer_blocks.0.norm3.weight torch.Size([320])
model.diffusion_model.input_blocks.2.1.transformer_blocks.0.norm3.bias torch.Size([320])
model.diffusion_model.input_blocks.2.1.proj_out.weight torch.Size([320, 320, 1, 1])
model.diffusion_model.input_blocks.2.1.proj_out.bias torch.Size([320])
model.diffusion_model.input_blocks.3.0.op.weight torch.Size([320, 320, 3, 3])
model.diffusion_model.input_blocks.3.0.op.bias torch.Size([320])
model.diffusion_model.input_blocks.4.0.in_layers.0.weight torch.Size([320])
model.diffusion_model.input_blocks.4.0.in_layers.0.bias torch.Size([320])
model.diffusion_model.input_blocks.4.0.in_layers.2.weight torch.Size([640, 320, 3, 3])
model.diffusion_model.input_blocks.4.0.in_layers.2.bias torch.Size([640])
model.diffusion_model.input_blocks.4.0.emb_layers.1.weight torch.Size([640, 1280])
model.diffusion_model.input_blocks.4.0.emb_layers.1.bias torch.Size([640])
model.diffusion_model.input_blocks.4.0.out_layers.0.weight torch.Size([640])
model.diffusion_model.input_blocks.4.0.out_layers.0.bias torch.Size([640])
model.diffusion_model.input_blocks.4.0.out_layers.3.weight torch.Size([640, 640, 3, 3])
model.diffusion_model.input_blocks.4.0.out_layers.3.bias torch.Size([640])
model.diffusion_model.input_blocks.4.0.skip_connection.weight torch.Size([640, 320, 1, 1])
model.diffusion_model.input_blocks.4.0.skip_connection.bias torch.Size([640])
model.diffusion_model.input_blocks.4.1.norm.weight torch.Size([640])
model.diffusion_model.input_blocks.4.1.norm.bias torch.Size([640])
model.diffusion_model.input_blocks.4.1.proj_in.weight torch.Size([640, 640, 1, 1])
model.diffusion_model.input_blocks.4.1.proj_in.bias torch.Size([640])
model.diffusion_model.input_blocks.4.1.transformer_blocks.0.attn1.to_q.weight torch.Size([640, 640])
model.diffusion_model.input_blocks.4.1.transformer_blocks.0.attn1.to_k.weight torch.Size([640, 640])
model.diffusion_model.input_blocks.4.1.transformer_blocks.0.attn1.to_v.weight torch.Size([640, 640])
model.diffusion_model.input_blocks.4.1.transformer_blocks.0.attn1.to_out.0.weight torch.Size([640, 640])
model.diffusion_model.input_blocks.4.1.transformer_blocks.0.attn1.to_out.0.bias torch.Size([640])
model.diffusion_model.input_blocks.4.1.transformer_blocks.0.ff.net.0.proj.weight torch.Size([5120, 640])
model.diffusion_model.input_blocks.4.1.transformer_blocks.0.ff.net.0.proj.bias torch.Size([5120])
model.diffusion_model.input_blocks.4.1.transformer_blocks.0.ff.net.2.weight torch.Size([640, 2560])
model.diffusion_model.input_blocks.4.1.transformer_blocks.0.ff.net.2.bias torch.Size([640])
model.diffusion_model.input_blocks.4.1.transformer_blocks.0.attn2.to_q.weight torch.Size([640, 640])
model.diffusion_model.input_blocks.4.1.transformer_blocks.0.attn2.to_k.weight torch.Size([640, 768])
model.diffusion_model.input_blocks.4.1.transformer_blocks.0.attn2.to_v.weight torch.Size([640, 768])
model.diffusion_model.input_blocks.4.1.transformer_blocks.0.attn2.to_out.0.weight torch.Size([640, 640])
model.diffusion_model.input_blocks.4.1.transformer_blocks.0.attn2.to_out.0.bias torch.Size([640])
model.diffusion_model.input_blocks.4.1.transformer_blocks.0.norm1.weight torch.Size([640])
model.diffusion_model.input_blocks.4.1.transformer_blocks.0.norm1.bias torch.Size([640])
model.diffusion_model.input_blocks.4.1.transformer_blocks.0.norm2.weight torch.Size([640])
model.diffusion_model.input_blocks.4.1.transformer_blocks.0.norm2.bias torch.Size([640])
model.diffusion_model.input_blocks.4.1.transformer_blocks.0.norm3.weight torch.Size([640])
model.diffusion_model.input_blocks.4.1.transformer_blocks.0.norm3.bias torch.Size([640])
model.diffusion_model.input_blocks.4.1.proj_out.weight torch.Size([640, 640, 1, 1])
model.diffusion_model.input_blocks.4.1.proj_out.bias torch.Size([640])
model.diffusion_model.input_blocks.5.0.in_layers.0.weight torch.Size([640])
model.diffusion_model.input_blocks.5.0.in_layers.0.bias torch.Size([640])
model.diffusion_model.input_blocks.5.0.in_layers.2.weight torch.Size([640, 640, 3, 3])
model.diffusion_model.input_blocks.5.0.in_layers.2.bias torch.Size([640])
model.diffusion_model.input_blocks.5.0.emb_layers.1.weight torch.Size([640, 1280])
model.diffusion_model.input_blocks.5.0.emb_layers.1.bias torch.Size([640])
model.diffusion_model.input_blocks.5.0.out_layers.0.weight torch.Size([640])
model.diffusion_model.input_blocks.5.0.out_layers.0.bias torch.Size([640])
model.diffusion_model.input_blocks.5.0.out_layers.3.weight torch.Size([640, 640, 3, 3])
model.diffusion_model.input_blocks.5.0.out_layers.3.bias torch.Size([640])
model.diffusion_model.input_blocks.5.1.norm.weight torch.Size([640])
model.diffusion_model.input_blocks.5.1.norm.bias torch.Size([640])
model.diffusion_model.input_blocks.5.1.proj_in.weight torch.Size([640, 640, 1, 1])
model.diffusion_model.input_blocks.5.1.proj_in.bias torch.Size([640])
model.diffusion_model.input_blocks.5.1.transformer_blocks.0.attn1.to_q.weight torch.Size([640, 640])
model.diffusion_model.input_blocks.5.1.transformer_blocks.0.attn1.to_k.weight torch.Size([640, 640])
model.diffusion_model.input_blocks.5.1.transformer_blocks.0.attn1.to_v.weight torch.Size([640, 640])
model.diffusion_model.input_blocks.5.1.transformer_blocks.0.attn1.to_out.0.weight torch.Size([640, 640])
model.diffusion_model.input_blocks.5.1.transformer_blocks.0.attn1.to_out.0.bias torch.Size([640])
model.diffusion_model.input_blocks.5.1.transformer_blocks.0.ff.net.0.proj.weight torch.Size([5120, 640])
model.diffusion_model.input_blocks.5.1.transformer_blocks.0.ff.net.0.proj.bias torch.Size([5120])
model.diffusion_model.input_blocks.5.1.transformer_blocks.0.ff.net.2.weight torch.Size([640, 2560])
model.diffusion_model.input_blocks.5.1.transformer_blocks.0.ff.net.2.bias torch.Size([640])
model.diffusion_model.input_blocks.5.1.transformer_blocks.0.attn2.to_q.weight torch.Size([640, 640])
model.diffusion_model.input_blocks.5.1.transformer_blocks.0.attn2.to_k.weight torch.Size([640, 768])
model.diffusion_model.input_blocks.5.1.transformer_blocks.0.attn2.to_v.weight torch.Size([640, 768])
model.diffusion_model.input_blocks.5.1.transformer_blocks.0.attn2.to_out.0.weight torch.Size([640, 640])
model.diffusion_model.input_blocks.5.1.transformer_blocks.0.attn2.to_out.0.bias torch.Size([640])
model.diffusion_model.input_blocks.5.1.transformer_blocks.0.norm1.weight torch.Size([640])
model.diffusion_model.input_blocks.5.1.transformer_blocks.0.norm1.bias torch.Size([640])
model.diffusion_model.input_blocks.5.1.transformer_blocks.0.norm2.weight torch.Size([640])
model.diffusion_model.input_blocks.5.1.transformer_blocks.0.norm2.bias torch.Size([640])
model.diffusion_model.input_blocks.5.1.transformer_blocks.0.norm3.weight torch.Size([640])
model.diffusion_model.input_blocks.5.1.transformer_blocks.0.norm3.bias torch.Size([640])
model.diffusion_model.input_blocks.5.1.proj_out.weight torch.Size([640, 640, 1, 1])
model.diffusion_model.input_blocks.5.1.proj_out.bias torch.Size([640])
model.diffusion_model.input_blocks.6.0.op.weight torch.Size([640, 640, 3, 3])
model.diffusion_model.input_blocks.6.0.op.bias torch.Size([640])
model.diffusion_model.input_blocks.7.0.in_layers.0.weight torch.Size([640])
model.diffusion_model.input_blocks.7.0.in_layers.0.bias torch.Size([640])
model.diffusion_model.input_blocks.7.0.in_layers.2.weight torch.Size([1280, 640, 3, 3])
model.diffusion_model.input_blocks.7.0.in_layers.2.bias torch.Size([1280])
model.diffusion_model.input_blocks.7.0.emb_layers.1.weight torch.Size([1280, 1280])
model.diffusion_model.input_blocks.7.0.emb_layers.1.bias torch.Size([1280])
model.diffusion_model.input_blocks.7.0.out_layers.0.weight torch.Size([1280])
model.diffusion_model.input_blocks.7.0.out_layers.0.bias torch.Size([1280])
model.diffusion_model.input_blocks.7.0.out_layers.3.weight torch.Size([1280, 1280, 3, 3])
model.diffusion_model.input_blocks.7.0.out_layers.3.bias torch.Size([1280])
model.diffusion_model.input_blocks.7.0.skip_connection.weight torch.Size([1280, 640, 1, 1])
model.diffusion_model.input_blocks.7.0.skip_connection.bias torch.Size([1280])
model.diffusion_model.input_blocks.7.1.norm.weight torch.Size([1280])
model.diffusion_model.input_blocks.7.1.norm.bias torch.Size([1280])
model.diffusion_model.input_blocks.7.1.proj_in.weight torch.Size([1280, 1280, 1, 1])
model.diffusion_model.input_blocks.7.1.proj_in.bias torch.Size([1280])
model.diffusion_model.input_blocks.7.1.transformer_blocks.0.attn1.to_q.weight torch.Size([1280, 1280])
model.diffusion_model.input_blocks.7.1.transformer_blocks.0.attn1.to_k.weight torch.Size([1280, 1280])
model.diffusion_model.input_blocks.7.1.transformer_blocks.0.attn1.to_v.weight torch.Size([1280, 1280])
model.diffusion_model.input_blocks.7.1.transformer_blocks.0.attn1.to_out.0.weight torch.Size([1280, 1280])
model.diffusion_model.input_blocks.7.1.transformer_blocks.0.attn1.to_out.0.bias torch.Size([1280])
model.diffusion_model.input_blocks.7.1.transformer_blocks.0.ff.net.0.proj.weight torch.Size([10240, 1280])
model.diffusion_model.input_blocks.7.1.transformer_blocks.0.ff.net.0.proj.bias torch.Size([10240])
model.diffusion_model.input_blocks.7.1.transformer_blocks.0.ff.net.2.weight torch.Size([1280, 5120])
model.diffusion_model.input_blocks.7.1.transformer_blocks.0.ff.net.2.bias torch.Size([1280])
model.diffusion_model.input_blocks.7.1.transformer_blocks.0.attn2.to_q.weight torch.Size([1280, 1280])
model.diffusion_model.input_blocks.7.1.transformer_blocks.0.attn2.to_k.weight torch.Size([1280, 768])
model.diffusion_model.input_blocks.7.1.transformer_blocks.0.attn2.to_v.weight torch.Size([1280, 768])
model.diffusion_model.input_blocks.7.1.transformer_blocks.0.attn2.to_out.0.weight torch.Size([1280, 1280])
model.diffusion_model.input_blocks.7.1.transformer_blocks.0.attn2.to_out.0.bias torch.Size([1280])
model.diffusion_model.input_blocks.7.1.transformer_blocks.0.norm1.weight torch.Size([1280])
model.diffusion_model.input_blocks.7.1.transformer_blocks.0.norm1.bias torch.Size([1280])
model.diffusion_model.input_blocks.7.1.transformer_blocks.0.norm2.weight torch.Size([1280])
model.diffusion_model.input_blocks.7.1.transformer_blocks.0.norm2.bias torch.Size([1280])
model.diffusion_model.input_blocks.7.1.transformer_blocks.0.norm3.weight torch.Size([1280])
model.diffusion_model.input_blocks.7.1.transformer_blocks.0.norm3.bias torch.Size([1280])
model.diffusion_model.input_blocks.7.1.proj_out.weight torch.Size([1280, 1280, 1, 1])
model.diffusion_model.input_blocks.7.1.proj_out.bias torch.Size([1280])
model.diffusion_model.input_blocks.8.0.in_layers.0.weight torch.Size([1280])
model.diffusion_model.input_blocks.8.0.in_layers.0.bias torch.Size([1280])
model.diffusion_model.input_blocks.8.0.in_layers.2.weight torch.Size([1280, 1280, 3, 3])
model.diffusion_model.input_blocks.8.0.in_layers.2.bias torch.Size([1280])
model.diffusion_model.input_blocks.8.0.emb_layers.1.weight torch.Size([1280, 1280])
model.diffusion_model.input_blocks.8.0.emb_layers.1.bias torch.Size([1280])
model.diffusion_model.input_blocks.8.0.out_layers.0.weight torch.Size([1280])
model.diffusion_model.input_blocks.8.0.out_layers.0.bias torch.Size([1280])
model.diffusion_model.input_blocks.8.0.out_layers.3.weight torch.Size([1280, 1280, 3, 3])
model.diffusion_model.input_blocks.8.0.out_layers.3.bias torch.Size([1280])
model.diffusion_model.input_blocks.8.1.norm.weight torch.Size([1280])
model.diffusion_model.input_blocks.8.1.norm.bias torch.Size([1280])
model.diffusion_model.input_blocks.8.1.proj_in.weight torch.Size([1280, 1280, 1, 1])
model.diffusion_model.input_blocks.8.1.proj_in.bias torch.Size([1280])
model.diffusion_model.input_blocks.8.1.transformer_blocks.0.attn1.to_q.weight torch.Size([1280, 1280])
model.diffusion_model.input_blocks.8.1.transformer_blocks.0.attn1.to_k.weight torch.Size([1280, 1280])
model.diffusion_model.input_blocks.8.1.transformer_blocks.0.attn1.to_v.weight torch.Size([1280, 1280])
model.diffusion_model.input_blocks.8.1.transformer_blocks.0.attn1.to_out.0.weight torch.Size([1280, 1280])
model.diffusion_model.input_blocks.8.1.transformer_blocks.0.attn1.to_out.0.bias torch.Size([1280])
model.diffusion_model.input_blocks.8.1.transformer_blocks.0.ff.net.0.proj.weight torch.Size([10240, 1280])
model.diffusion_model.input_blocks.8.1.transformer_blocks.0.ff.net.0.proj.bias torch.Size([10240])
model.diffusion_model.input_blocks.8.1.transformer_blocks.0.ff.net.2.weight torch.Size([1280, 5120])
model.diffusion_model.input_blocks.8.1.transformer_blocks.0.ff.net.2.bias torch.Size([1280])
model.diffusion_model.input_blocks.8.1.transformer_blocks.0.attn2.to_q.weight torch.Size([1280, 1280])
model.diffusion_model.input_blocks.8.1.transformer_blocks.0.attn2.to_k.weight torch.Size([1280, 768])
model.diffusion_model.input_blocks.8.1.transformer_blocks.0.attn2.to_v.weight torch.Size([1280, 768])
model.diffusion_model.input_blocks.8.1.transformer_blocks.0.attn2.to_out.0.weight torch.Size([1280, 1280])
model.diffusion_model.input_blocks.8.1.transformer_blocks.0.attn2.to_out.0.bias torch.Size([1280])
model.diffusion_model.input_blocks.8.1.transformer_blocks.0.norm1.weight torch.Size([1280])
model.diffusion_model.input_blocks.8.1.transformer_blocks.0.norm1.bias torch.Size([1280])
model.diffusion_model.input_blocks.8.1.transformer_blocks.0.norm2.weight torch.Size([1280])
model.diffusion_model.input_blocks.8.1.transformer_blocks.0.norm2.bias torch.Size([1280])
model.diffusion_model.input_blocks.8.1.transformer_blocks.0.norm3.weight torch.Size([1280])
model.diffusion_model.input_blocks.8.1.transformer_blocks.0.norm3.bias torch.Size([1280])
model.diffusion_model.input_blocks.8.1.proj_out.weight torch.Size([1280, 1280, 1, 1])
model.diffusion_model.input_blocks.8.1.proj_out.bias torch.Size([1280])
model.diffusion_model.input_blocks.9.0.op.weight torch.Size([1280, 1280, 3, 3])
model.diffusion_model.input_blocks.9.0.op.bias torch.Size([1280])
model.diffusion_model.input_blocks.10.0.in_layers.0.weight torch.Size([1280])
model.diffusion_model.input_blocks.10.0.in_layers.0.bias torch.Size([1280])
model.diffusion_model.input_blocks.10.0.in_layers.2.weight torch.Size([1280, 1280, 3, 3])
model.diffusion_model.input_blocks.10.0.in_layers.2.bias torch.Size([1280])
model.diffusion_model.input_blocks.10.0.emb_layers.1.weight torch.Size([1280, 1280])
model.diffusion_model.input_blocks.10.0.emb_layers.1.bias torch.Size([1280])
model.diffusion_model.input_blocks.10.0.out_layers.0.weight torch.Size([1280])
model.diffusion_model.input_blocks.10.0.out_layers.0.bias torch.Size([1280])
model.diffusion_model.input_blocks.10.0.out_layers.3.weight torch.Size([1280, 1280, 3, 3])
model.diffusion_model.input_blocks.10.0.out_layers.3.bias torch.Size([1280])
model.diffusion_model.input_blocks.11.0.in_layers.0.weight torch.Size([1280])
model.diffusion_model.input_blocks.11.0.in_layers.0.bias torch.Size([1280])
model.diffusion_model.input_blocks.11.0.in_layers.2.weight torch.Size([1280, 1280, 3, 3])
model.diffusion_model.input_blocks.11.0.in_layers.2.bias torch.Size([1280])
model.diffusion_model.input_blocks.11.0.emb_layers.1.weight torch.Size([1280, 1280])
model.diffusion_model.input_blocks.11.0.emb_layers.1.bias torch.Size([1280])
model.diffusion_model.input_blocks.11.0.out_layers.0.weight torch.Size([1280])
model.diffusion_model.input_blocks.11.0.out_layers.0.bias torch.Size([1280])
model.diffusion_model.input_blocks.11.0.out_layers.3.weight torch.Size([1280, 1280, 3, 3])
model.diffusion_model.input_blocks.11.0.out_layers.3.bias torch.Size([1280])
model.diffusion_model.middle_block.0.in_layers.0.weight torch.Size([1280])
model.diffusion_model.middle_block.0.in_layers.0.bias torch.Size([1280])
model.diffusion_model.middle_block.0.in_layers.2.weight torch.Size([1280, 1280, 3, 3])
model.diffusion_model.middle_block.0.in_layers.2.bias torch.Size([1280])
model.diffusion_model.middle_block.0.emb_layers.1.weight torch.Size([1280, 1280])
model.diffusion_model.middle_block.0.emb_layers.1.bias torch.Size([1280])
model.diffusion_model.middle_block.0.out_layers.0.weight torch.Size([1280])
model.diffusion_model.middle_block.0.out_layers.0.bias torch.Size([1280])
model.diffusion_model.middle_block.0.out_layers.3.weight torch.Size([1280, 1280, 3, 3])
model.diffusion_model.middle_block.0.out_layers.3.bias torch.Size([1280])
model.diffusion_model.middle_block.1.norm.weight torch.Size([1280])
model.diffusion_model.middle_block.1.norm.bias torch.Size([1280])
model.diffusion_model.middle_block.1.proj_in.weight torch.Size([1280, 1280, 1, 1])
model.diffusion_model.middle_block.1.proj_in.bias torch.Size([1280])
model.diffusion_model.middle_block.1.transformer_blocks.0.attn1.to_q.weight torch.Size([1280, 1280])
model.diffusion_model.middle_block.1.transformer_blocks.0.attn1.to_k.weight torch.Size([1280, 1280])
model.diffusion_model.middle_block.1.transformer_blocks.0.attn1.to_v.weight torch.Size([1280, 1280])
model.diffusion_model.middle_block.1.transformer_blocks.0.attn1.to_out.0.weight torch.Size([1280, 1280])
model.diffusion_model.middle_block.1.transformer_blocks.0.attn1.to_out.0.bias torch.Size([1280])
model.diffusion_model.middle_block.1.transformer_blocks.0.ff.net.0.proj.weight torch.Size([10240, 1280])
model.diffusion_model.middle_block.1.transformer_blocks.0.ff.net.0.proj.bias torch.Size([10240])
model.diffusion_model.middle_block.1.transformer_blocks.0.ff.net.2.weight torch.Size([1280, 5120])
model.diffusion_model.middle_block.1.transformer_blocks.0.ff.net.2.bias torch.Size([1280])
model.diffusion_model.middle_block.1.transformer_blocks.0.attn2.to_q.weight torch.Size([1280, 1280])
model.diffusion_model.middle_block.1.transformer_blocks.0.attn2.to_k.weight torch.Size([1280, 768])
model.diffusion_model.middle_block.1.transformer_blocks.0.attn2.to_v.weight torch.Size([1280, 768])
model.diffusion_model.middle_block.1.transformer_blocks.0.attn2.to_out.0.weight torch.Size([1280, 1280])
model.diffusion_model.middle_block.1.transformer_blocks.0.attn2.to_out.0.bias torch.Size([1280])
model.diffusion_model.middle_block.1.transformer_blocks.0.norm1.weight torch.Size([1280])
model.diffusion_model.middle_block.1.transformer_blocks.0.norm1.bias torch.Size([1280])
model.diffusion_model.middle_block.1.transformer_blocks.0.norm2.weight torch.Size([1280])
model.diffusion_model.middle_block.1.transformer_blocks.0.norm2.bias torch.Size([1280])
model.diffusion_model.middle_block.1.transformer_blocks.0.norm3.weight torch.Size([1280])
model.diffusion_model.middle_block.1.transformer_blocks.0.norm3.bias torch.Size([1280])
model.diffusion_model.middle_block.1.proj_out.weight torch.Size([1280, 1280, 1, 1])
model.diffusion_model.middle_block.1.proj_out.bias torch.Size([1280])
model.diffusion_model.middle_block.2.in_layers.0.weight torch.Size([1280])
model.diffusion_model.middle_block.2.in_layers.0.bias torch.Size([1280])
model.diffusion_model.middle_block.2.in_layers.2.weight torch.Size([1280, 1280, 3, 3])
model.diffusion_model.middle_block.2.in_layers.2.bias torch.Size([1280])
model.diffusion_model.middle_block.2.emb_layers.1.weight torch.Size([1280, 1280])
model.diffusion_model.middle_block.2.emb_layers.1.bias torch.Size([1280])
model.diffusion_model.middle_block.2.out_layers.0.weight torch.Size([1280])
model.diffusion_model.middle_block.2.out_layers.0.bias torch.Size([1280])
model.diffusion_model.middle_block.2.out_layers.3.weight torch.Size([1280, 1280, 3, 3])
model.diffusion_model.middle_block.2.out_layers.3.bias torch.Size([1280])
model.diffusion_model.output_blocks.0.0.in_layers.0.weight torch.Size([2560])
model.diffusion_model.output_blocks.0.0.in_layers.0.bias torch.Size([2560])
model.diffusion_model.output_blocks.0.0.in_layers.2.weight torch.Size([1280, 2560, 3, 3])
model.diffusion_model.output_blocks.0.0.in_layers.2.bias torch.Size([1280])
model.diffusion_model.output_blocks.0.0.emb_layers.1.weight torch.Size([1280, 1280])
model.diffusion_model.output_blocks.0.0.emb_layers.1.bias torch.Size([1280])
model.diffusion_model.output_blocks.0.0.out_layers.0.weight torch.Size([1280])
model.diffusion_model.output_blocks.0.0.out_layers.0.bias torch.Size([1280])
model.diffusion_model.output_blocks.0.0.out_layers.3.weight torch.Size([1280, 1280, 3, 3])
model.diffusion_model.output_blocks.0.0.out_layers.3.bias torch.Size([1280])
model.diffusion_model.output_blocks.0.0.skip_connection.weight torch.Size([1280, 2560, 1, 1])
model.diffusion_model.output_blocks.0.0.skip_connection.bias torch.Size([1280])
model.diffusion_model.output_blocks.1.0.in_layers.0.weight torch.Size([2560])
model.diffusion_model.output_blocks.1.0.in_layers.0.bias torch.Size([2560])
model.diffusion_model.output_blocks.1.0.in_layers.2.weight torch.Size([1280, 2560, 3, 3])
model.diffusion_model.output_blocks.1.0.in_layers.2.bias torch.Size([1280])
model.diffusion_model.output_blocks.1.0.emb_layers.1.weight torch.Size([1280, 1280])
model.diffusion_model.output_blocks.1.0.emb_layers.1.bias torch.Size([1280])
model.diffusion_model.output_blocks.1.0.out_layers.0.weight torch.Size([1280])
model.diffusion_model.output_blocks.1.0.out_layers.0.bias torch.Size([1280])
model.diffusion_model.output_blocks.1.0.out_layers.3.weight torch.Size([1280, 1280, 3, 3])
model.diffusion_model.output_blocks.1.0.out_layers.3.bias torch.Size([1280])
model.diffusion_model.output_blocks.1.0.skip_connection.weight torch.Size([1280, 2560, 1, 1])
model.diffusion_model.output_blocks.1.0.skip_connection.bias torch.Size([1280])
model.diffusion_model.output_blocks.2.0.in_layers.0.weight torch.Size([2560])
model.diffusion_model.output_blocks.2.0.in_layers.0.bias torch.Size([2560])
model.diffusion_model.output_blocks.2.0.in_layers.2.weight torch.Size([1280, 2560, 3, 3])
model.diffusion_model.output_blocks.2.0.in_layers.2.bias torch.Size([1280])
model.diffusion_model.output_blocks.2.0.emb_layers.1.weight torch.Size([1280, 1280])
model.diffusion_model.output_blocks.2.0.emb_layers.1.bias torch.Size([1280])
model.diffusion_model.output_blocks.2.0.out_layers.0.weight torch.Size([1280])
model.diffusion_model.output_blocks.2.0.out_layers.0.bias torch.Size([1280])
model.diffusion_model.output_blocks.2.0.out_layers.3.weight torch.Size([1280, 1280, 3, 3])
model.diffusion_model.output_blocks.2.0.out_layers.3.bias torch.Size([1280])
model.diffusion_model.output_blocks.2.0.skip_connection.weight torch.Size([1280, 2560, 1, 1])
model.diffusion_model.output_blocks.2.0.skip_connection.bias torch.Size([1280])
model.diffusion_model.output_blocks.2.1.conv.weight torch.Size([1280, 1280, 3, 3])
model.diffusion_model.output_blocks.2.1.conv.bias torch.Size([1280])
model.diffusion_model.output_blocks.3.0.in_layers.0.weight torch.Size([2560])
model.diffusion_model.output_blocks.3.0.in_layers.0.bias torch.Size([2560])
model.diffusion_model.output_blocks.3.0.in_layers.2.weight torch.Size([1280, 2560, 3, 3])
model.diffusion_model.output_blocks.3.0.in_layers.2.bias torch.Size([1280])
model.diffusion_model.output_blocks.3.0.emb_layers.1.weight torch.Size([1280, 1280])
model.diffusion_model.output_blocks.3.0.emb_layers.1.bias torch.Size([1280])
model.diffusion_model.output_blocks.3.0.out_layers.0.weight torch.Size([1280])
model.diffusion_model.output_blocks.3.0.out_layers.0.bias torch.Size([1280])
model.diffusion_model.output_blocks.3.0.out_layers.3.weight torch.Size([1280, 1280, 3, 3])
model.diffusion_model.output_blocks.3.0.out_layers.3.bias torch.Size([1280])
model.diffusion_model.output_blocks.3.0.skip_connection.weight torch.Size([1280, 2560, 1, 1])
model.diffusion_model.output_blocks.3.0.skip_connection.bias torch.Size([1280])
model.diffusion_model.output_blocks.3.1.norm.weight torch.Size([1280])
model.diffusion_model.output_blocks.3.1.norm.bias torch.Size([1280])
model.diffusion_model.output_blocks.3.1.proj_in.weight torch.Size([1280, 1280, 1, 1])
model.diffusion_model.output_blocks.3.1.proj_in.bias torch.Size([1280])
model.diffusion_model.output_blocks.3.1.transformer_blocks.0.attn1.to_q.weight torch.Size([1280, 1280])
model.diffusion_model.output_blocks.3.1.transformer_blocks.0.attn1.to_k.weight torch.Size([1280, 1280])
model.diffusion_model.output_blocks.3.1.transformer_blocks.0.attn1.to_v.weight torch.Size([1280, 1280])
model.diffusion_model.output_blocks.3.1.transformer_blocks.0.attn1.to_out.0.weight torch.Size([1280, 1280])
model.diffusion_model.output_blocks.3.1.transformer_blocks.0.attn1.to_out.0.bias torch.Size([1280])
model.diffusion_model.output_blocks.3.1.transformer_blocks.0.ff.net.0.proj.weight torch.Size([10240, 1280])
model.diffusion_model.output_blocks.3.1.transformer_blocks.0.ff.net.0.proj.bias torch.Size([10240])
model.diffusion_model.output_blocks.3.1.transformer_blocks.0.ff.net.2.weight torch.Size([1280, 5120])
model.diffusion_model.output_blocks.3.1.transformer_blocks.0.ff.net.2.bias torch.Size([1280])
model.diffusion_model.output_blocks.3.1.transformer_blocks.0.attn2.to_q.weight torch.Size([1280, 1280])
model.diffusion_model.output_blocks.3.1.transformer_blocks.0.attn2.to_k.weight torch.Size([1280, 768])
model.diffusion_model.output_blocks.3.1.transformer_blocks.0.attn2.to_v.weight torch.Size([1280, 768])
model.diffusion_model.output_blocks.3.1.transformer_blocks.0.attn2.to_out.0.weight torch.Size([1280, 1280])
model.diffusion_model.output_blocks.3.1.transformer_blocks.0.attn2.to_out.0.bias torch.Size([1280])
model.diffusion_model.output_blocks.3.1.transformer_blocks.0.norm1.weight torch.Size([1280])
model.diffusion_model.output_blocks.3.1.transformer_blocks.0.norm1.bias torch.Size([1280])
model.diffusion_model.output_blocks.3.1.transformer_blocks.0.norm2.weight torch.Size([1280])
model.diffusion_model.output_blocks.3.1.transformer_blocks.0.norm2.bias torch.Size([1280])
model.diffusion_model.output_blocks.3.1.transformer_blocks.0.norm3.weight torch.Size([1280])
model.diffusion_model.output_blocks.3.1.transformer_blocks.0.norm3.bias torch.Size([1280])
model.diffusion_model.output_blocks.3.1.proj_out.weight torch.Size([1280, 1280, 1, 1])
model.diffusion_model.output_blocks.3.1.proj_out.bias torch.Size([1280])
model.diffusion_model.output_blocks.4.0.in_layers.0.weight torch.Size([2560])
model.diffusion_model.output_blocks.4.0.in_layers.0.bias torch.Size([2560])
model.diffusion_model.output_blocks.4.0.in_layers.2.weight torch.Size([1280, 2560, 3, 3])
model.diffusion_model.output_blocks.4.0.in_layers.2.bias torch.Size([1280])
model.diffusion_model.output_blocks.4.0.emb_layers.1.weight torch.Size([1280, 1280])
model.diffusion_model.output_blocks.4.0.emb_layers.1.bias torch.Size([1280])
model.diffusion_model.output_blocks.4.0.out_layers.0.weight torch.Size([1280])
model.diffusion_model.output_blocks.4.0.out_layers.0.bias torch.Size([1280])
model.diffusion_model.output_blocks.4.0.out_layers.3.weight torch.Size([1280, 1280, 3, 3])
model.diffusion_model.output_blocks.4.0.out_layers.3.bias torch.Size([1280])
model.diffusion_model.output_blocks.4.0.skip_connection.weight torch.Size([1280, 2560, 1, 1])
model.diffusion_model.output_blocks.4.0.skip_connection.bias torch.Size([1280])
model.diffusion_model.output_blocks.4.1.norm.weight torch.Size([1280])
model.diffusion_model.output_blocks.4.1.norm.bias torch.Size([1280])
model.diffusion_model.output_blocks.4.1.proj_in.weight torch.Size([1280, 1280, 1, 1])
model.diffusion_model.output_blocks.4.1.proj_in.bias torch.Size([1280])
model.diffusion_model.output_blocks.4.1.transformer_blocks.0.attn1.to_q.weight torch.Size([1280, 1280])
model.diffusion_model.output_blocks.4.1.transformer_blocks.0.attn1.to_k.weight torch.Size([1280, 1280])
model.diffusion_model.output_blocks.4.1.transformer_blocks.0.attn1.to_v.weight torch.Size([1280, 1280])
model.diffusion_model.output_blocks.4.1.transformer_blocks.0.attn1.to_out.0.weight torch.Size([1280, 1280])
model.diffusion_model.output_blocks.4.1.transformer_blocks.0.attn1.to_out.0.bias torch.Size([1280])
model.diffusion_model.output_blocks.4.1.transformer_blocks.0.ff.net.0.proj.weight torch.Size([10240, 1280])
model.diffusion_model.output_blocks.4.1.transformer_blocks.0.ff.net.0.proj.bias torch.Size([10240])
model.diffusion_model.output_blocks.4.1.transformer_blocks.0.ff.net.2.weight torch.Size([1280, 5120])
model.diffusion_model.output_blocks.4.1.transformer_blocks.0.ff.net.2.bias torch.Size([1280])
model.diffusion_model.output_blocks.4.1.transformer_blocks.0.attn2.to_q.weight torch.Size([1280, 1280])
model.diffusion_model.output_blocks.4.1.transformer_blocks.0.attn2.to_k.weight torch.Size([1280, 768])
model.diffusion_model.output_blocks.4.1.transformer_blocks.0.attn2.to_v.weight torch.Size([1280, 768])
model.diffusion_model.output_blocks.4.1.transformer_blocks.0.attn2.to_out.0.weight torch.Size([1280, 1280])
model.diffusion_model.output_blocks.4.1.transformer_blocks.0.attn2.to_out.0.bias torch.Size([1280])
model.diffusion_model.output_blocks.4.1.transformer_blocks.0.norm1.weight torch.Size([1280])
model.diffusion_model.output_blocks.4.1.transformer_blocks.0.norm1.bias torch.Size([1280])
model.diffusion_model.output_blocks.4.1.transformer_blocks.0.norm2.weight torch.Size([1280])
model.diffusion_model.output_blocks.4.1.transformer_blocks.0.norm2.bias torch.Size([1280])
model.diffusion_model.output_blocks.4.1.transformer_blocks.0.norm3.weight torch.Size([1280])
model.diffusion_model.output_blocks.4.1.transformer_blocks.0.norm3.bias torch.Size([1280])
model.diffusion_model.output_blocks.4.1.proj_out.weight torch.Size([1280, 1280, 1, 1])
model.diffusion_model.output_blocks.4.1.proj_out.bias torch.Size([1280])
model.diffusion_model.output_blocks.5.0.in_layers.0.weight torch.Size([1920])
model.diffusion_model.output_blocks.5.0.in_layers.0.bias torch.Size([1920])
model.diffusion_model.output_blocks.5.0.in_layers.2.weight torch.Size([1280, 1920, 3, 3])
model.diffusion_model.output_blocks.5.0.in_layers.2.bias torch.Size([1280])
model.diffusion_model.output_blocks.5.0.emb_layers.1.weight torch.Size([1280, 1280])
model.diffusion_model.output_blocks.5.0.emb_layers.1.bias torch.Size([1280])
model.diffusion_model.output_blocks.5.0.out_layers.0.weight torch.Size([1280])
model.diffusion_model.output_blocks.5.0.out_layers.0.bias torch.Size([1280])
model.diffusion_model.output_blocks.5.0.out_layers.3.weight torch.Size([1280, 1280, 3, 3])
model.diffusion_model.output_blocks.5.0.out_layers.3.bias torch.Size([1280])
model.diffusion_model.output_blocks.5.0.skip_connection.weight torch.Size([1280, 1920, 1, 1])
model.diffusion_model.output_blocks.5.0.skip_connection.bias torch.Size([1280])
model.diffusion_model.output_blocks.5.1.norm.weight torch.Size([1280])
model.diffusion_model.output_blocks.5.1.norm.bias torch.Size([1280])
model.diffusion_model.output_blocks.5.1.proj_in.weight torch.Size([1280, 1280, 1, 1])
model.diffusion_model.output_blocks.5.1.proj_in.bias torch.Size([1280])
model.diffusion_model.output_blocks.5.1.transformer_blocks.0.attn1.to_q.weight torch.Size([1280, 1280])
model.diffusion_model.output_blocks.5.1.transformer_blocks.0.attn1.to_k.weight torch.Size([1280, 1280])
model.diffusion_model.output_blocks.5.1.transformer_blocks.0.attn1.to_v.weight torch.Size([1280, 1280])
model.diffusion_model.output_blocks.5.1.transformer_blocks.0.attn1.to_out.0.weight torch.Size([1280, 1280])
model.diffusion_model.output_blocks.5.1.transformer_blocks.0.attn1.to_out.0.bias torch.Size([1280])
model.diffusion_model.output_blocks.5.1.transformer_blocks.0.ff.net.0.proj.weight torch.Size([10240, 1280])
model.diffusion_model.output_blocks.5.1.transformer_blocks.0.ff.net.0.proj.bias torch.Size([10240])
model.diffusion_model.output_blocks.5.1.transformer_blocks.0.ff.net.2.weight torch.Size([1280, 5120])
model.diffusion_model.output_blocks.5.1.transformer_blocks.0.ff.net.2.bias torch.Size([1280])
model.diffusion_model.output_blocks.5.1.transformer_blocks.0.attn2.to_q.weight torch.Size([1280, 1280])
model.diffusion_model.output_blocks.5.1.transformer_blocks.0.attn2.to_k.weight torch.Size([1280, 768])
model.diffusion_model.output_blocks.5.1.transformer_blocks.0.attn2.to_v.weight torch.Size([1280, 768])
model.diffusion_model.output_blocks.5.1.transformer_blocks.0.attn2.to_out.0.weight torch.Size([1280, 1280])
model.diffusion_model.output_blocks.5.1.transformer_blocks.0.attn2.to_out.0.bias torch.Size([1280])
model.diffusion_model.output_blocks.5.1.transformer_blocks.0.norm1.weight torch.Size([1280])
model.diffusion_model.output_blocks.5.1.transformer_blocks.0.norm1.bias torch.Size([1280])
model.diffusion_model.output_blocks.5.1.transformer_blocks.0.norm2.weight torch.Size([1280])
model.diffusion_model.output_blocks.5.1.transformer_blocks.0.norm2.bias torch.Size([1280])
model.diffusion_model.output_blocks.5.1.transformer_blocks.0.norm3.weight torch.Size([1280])
model.diffusion_model.output_blocks.5.1.transformer_blocks.0.norm3.bias torch.Size([1280])
model.diffusion_model.output_blocks.5.1.proj_out.weight torch.Size([1280, 1280, 1, 1])
model.diffusion_model.output_blocks.5.1.proj_out.bias torch.Size([1280])
model.diffusion_model.output_blocks.5.2.conv.weight torch.Size([1280, 1280, 3, 3])
model.diffusion_model.output_blocks.5.2.conv.bias torch.Size([1280])
model.diffusion_model.output_blocks.6.0.in_layers.0.weight torch.Size([1920])
model.diffusion_model.output_blocks.6.0.in_layers.0.bias torch.Size([1920])
model.diffusion_model.output_blocks.6.0.in_layers.2.weight torch.Size([640, 1920, 3, 3])
model.diffusion_model.output_blocks.6.0.in_layers.2.bias torch.Size([640])
model.diffusion_model.output_blocks.6.0.emb_layers.1.weight torch.Size([640, 1280])
model.diffusion_model.output_blocks.6.0.emb_layers.1.bias torch.Size([640])
model.diffusion_model.output_blocks.6.0.out_layers.0.weight torch.Size([640])
model.diffusion_model.output_blocks.6.0.out_layers.0.bias torch.Size([640])
model.diffusion_model.output_blocks.6.0.out_layers.3.weight torch.Size([640, 640, 3, 3])
model.diffusion_model.output_blocks.6.0.out_layers.3.bias torch.Size([640])
model.diffusion_model.output_blocks.6.0.skip_connection.weight torch.Size([640, 1920, 1, 1])
model.diffusion_model.output_blocks.6.0.skip_connection.bias torch.Size([640])
model.diffusion_model.output_blocks.6.1.norm.weight torch.Size([640])
model.diffusion_model.output_blocks.6.1.norm.bias torch.Size([640])
model.diffusion_model.output_blocks.6.1.proj_in.weight torch.Size([640, 640, 1, 1])
model.diffusion_model.output_blocks.6.1.proj_in.bias torch.Size([640])
model.diffusion_model.output_blocks.6.1.transformer_blocks.0.attn1.to_q.weight torch.Size([640, 640])
model.diffusion_model.output_blocks.6.1.transformer_blocks.0.attn1.to_k.weight torch.Size([640, 640])
model.diffusion_model.output_blocks.6.1.transformer_blocks.0.attn1.to_v.weight torch.Size([640, 640])
model.diffusion_model.output_blocks.6.1.transformer_blocks.0.attn1.to_out.0.weight torch.Size([640, 640])
model.diffusion_model.output_blocks.6.1.transformer_blocks.0.attn1.to_out.0.bias torch.Size([640])
model.diffusion_model.output_blocks.6.1.transformer_blocks.0.ff.net.0.proj.weight torch.Size([5120, 640])
model.diffusion_model.output_blocks.6.1.transformer_blocks.0.ff.net.0.proj.bias torch.Size([5120])
model.diffusion_model.output_blocks.6.1.transformer_blocks.0.ff.net.2.weight torch.Size([640, 2560])
model.diffusion_model.output_blocks.6.1.transformer_blocks.0.ff.net.2.bias torch.Size([640])
model.diffusion_model.output_blocks.6.1.transformer_blocks.0.attn2.to_q.weight torch.Size([640, 640])
model.diffusion_model.output_blocks.6.1.transformer_blocks.0.attn2.to_k.weight torch.Size([640, 768])
model.diffusion_model.output_blocks.6.1.transformer_blocks.0.attn2.to_v.weight torch.Size([640, 768])
model.diffusion_model.output_blocks.6.1.transformer_blocks.0.attn2.to_out.0.weight torch.Size([640, 640])
model.diffusion_model.output_blocks.6.1.transformer_blocks.0.attn2.to_out.0.bias torch.Size([640])
model.diffusion_model.output_blocks.6.1.transformer_blocks.0.norm1.weight torch.Size([640])
model.diffusion_model.output_blocks.6.1.transformer_blocks.0.norm1.bias torch.Size([640])
model.diffusion_model.output_blocks.6.1.transformer_blocks.0.norm2.weight torch.Size([640])
model.diffusion_model.output_blocks.6.1.transformer_blocks.0.norm2.bias torch.Size([640])
model.diffusion_model.output_blocks.6.1.transformer_blocks.0.norm3.weight torch.Size([640])
model.diffusion_model.output_blocks.6.1.transformer_blocks.0.norm3.bias torch.Size([640])
model.diffusion_model.output_blocks.6.1.proj_out.weight torch.Size([640, 640, 1, 1])
model.diffusion_model.output_blocks.6.1.proj_out.bias torch.Size([640])
model.diffusion_model.output_blocks.7.0.in_layers.0.weight torch.Size([1280])
model.diffusion_model.output_blocks.7.0.in_layers.0.bias torch.Size([1280])
model.diffusion_model.output_blocks.7.0.in_layers.2.weight torch.Size([640, 1280, 3, 3])
model.diffusion_model.output_blocks.7.0.in_layers.2.bias torch.Size([640])
model.diffusion_model.output_blocks.7.0.emb_layers.1.weight torch.Size([640, 1280])
model.diffusion_model.output_blocks.7.0.emb_layers.1.bias torch.Size([640])
model.diffusion_model.output_blocks.7.0.out_layers.0.weight torch.Size([640])
model.diffusion_model.output_blocks.7.0.out_layers.0.bias torch.Size([640])
model.diffusion_model.output_blocks.7.0.out_layers.3.weight torch.Size([640, 640, 3, 3])
model.diffusion_model.output_blocks.7.0.out_layers.3.bias torch.Size([640])
model.diffusion_model.output_blocks.7.0.skip_connection.weight torch.Size([640, 1280, 1, 1])
model.diffusion_model.output_blocks.7.0.skip_connection.bias torch.Size([640])
model.diffusion_model.output_blocks.7.1.norm.weight torch.Size([640])
model.diffusion_model.output_blocks.7.1.norm.bias torch.Size([640])
model.diffusion_model.output_blocks.7.1.proj_in.weight torch.Size([640, 640, 1, 1])
model.diffusion_model.output_blocks.7.1.proj_in.bias torch.Size([640])
model.diffusion_model.output_blocks.7.1.transformer_blocks.0.attn1.to_q.weight torch.Size([640, 640])
model.diffusion_model.output_blocks.7.1.transformer_blocks.0.attn1.to_k.weight torch.Size([640, 640])
model.diffusion_model.output_blocks.7.1.transformer_blocks.0.attn1.to_v.weight torch.Size([640, 640])
model.diffusion_model.output_blocks.7.1.transformer_blocks.0.attn1.to_out.0.weight torch.Size([640, 640])
model.diffusion_model.output_blocks.7.1.transformer_blocks.0.attn1.to_out.0.bias torch.Size([640])
model.diffusion_model.output_blocks.7.1.transformer_blocks.0.ff.net.0.proj.weight torch.Size([5120, 640])
model.diffusion_model.output_blocks.7.1.transformer_blocks.0.ff.net.0.proj.bias torch.Size([5120])
model.diffusion_model.output_blocks.7.1.transformer_blocks.0.ff.net.2.weight torch.Size([640, 2560])
model.diffusion_model.output_blocks.7.1.transformer_blocks.0.ff.net.2.bias torch.Size([640])
model.diffusion_model.output_blocks.7.1.transformer_blocks.0.attn2.to_q.weight torch.Size([640, 640])
model.diffusion_model.output_blocks.7.1.transformer_blocks.0.attn2.to_k.weight torch.Size([640, 768])
model.diffusion_model.output_blocks.7.1.transformer_blocks.0.attn2.to_v.weight torch.Size([640, 768])
model.diffusion_model.output_blocks.7.1.transformer_blocks.0.attn2.to_out.0.weight torch.Size([640, 640])
model.diffusion_model.output_blocks.7.1.transformer_blocks.0.attn2.to_out.0.bias torch.Size([640])
model.diffusion_model.output_blocks.7.1.transformer_blocks.0.norm1.weight torch.Size([640])
model.diffusion_model.output_blocks.7.1.transformer_blocks.0.norm1.bias torch.Size([640])
model.diffusion_model.output_blocks.7.1.transformer_blocks.0.norm2.weight torch.Size([640])
model.diffusion_model.output_blocks.7.1.transformer_blocks.0.norm2.bias torch.Size([640])
model.diffusion_model.output_blocks.7.1.transformer_blocks.0.norm3.weight torch.Size([640])
model.diffusion_model.output_blocks.7.1.transformer_blocks.0.norm3.bias torch.Size([640])
model.diffusion_model.output_blocks.7.1.proj_out.weight torch.Size([640, 640, 1, 1])
model.diffusion_model.output_blocks.7.1.proj_out.bias torch.Size([640])
model.diffusion_model.output_blocks.8.0.in_layers.0.weight torch.Size([960])
model.diffusion_model.output_blocks.8.0.in_layers.0.bias torch.Size([960])
model.diffusion_model.output_blocks.8.0.in_layers.2.weight torch.Size([640, 960, 3, 3])
model.diffusion_model.output_blocks.8.0.in_layers.2.bias torch.Size([640])
model.diffusion_model.output_blocks.8.0.emb_layers.1.weight torch.Size([640, 1280])
model.diffusion_model.output_blocks.8.0.emb_layers.1.bias torch.Size([640])
model.diffusion_model.output_blocks.8.0.out_layers.0.weight torch.Size([640])
model.diffusion_model.output_blocks.8.0.out_layers.0.bias torch.Size([640])
model.diffusion_model.output_blocks.8.0.out_layers.3.weight torch.Size([640, 640, 3, 3])
model.diffusion_model.output_blocks.8.0.out_layers.3.bias torch.Size([640])
model.diffusion_model.output_blocks.8.0.skip_connection.weight torch.Size([640, 960, 1, 1])
model.diffusion_model.output_blocks.8.0.skip_connection.bias torch.Size([640])
model.diffusion_model.output_blocks.8.1.norm.weight torch.Size([640])
model.diffusion_model.output_blocks.8.1.norm.bias torch.Size([640])
model.diffusion_model.output_blocks.8.1.proj_in.weight torch.Size([640, 640, 1, 1])
model.diffusion_model.output_blocks.8.1.proj_in.bias torch.Size([640])
model.diffusion_model.output_blocks.8.1.transformer_blocks.0.attn1.to_q.weight torch.Size([640, 640])
model.diffusion_model.output_blocks.8.1.transformer_blocks.0.attn1.to_k.weight torch.Size([640, 640])
model.diffusion_model.output_blocks.8.1.transformer_blocks.0.attn1.to_v.weight torch.Size([640, 640])
model.diffusion_model.output_blocks.8.1.transformer_blocks.0.attn1.to_out.0.weight torch.Size([640, 640])
model.diffusion_model.output_blocks.8.1.transformer_blocks.0.attn1.to_out.0.bias torch.Size([640])
model.diffusion_model.output_blocks.8.1.transformer_blocks.0.ff.net.0.proj.weight torch.Size([5120, 640])
model.diffusion_model.output_blocks.8.1.transformer_blocks.0.ff.net.0.proj.bias torch.Size([5120])
model.diffusion_model.output_blocks.8.1.transformer_blocks.0.ff.net.2.weight torch.Size([640, 2560])
model.diffusion_model.output_blocks.8.1.transformer_blocks.0.ff.net.2.bias torch.Size([640])
model.diffusion_model.output_blocks.8.1.transformer_blocks.0.attn2.to_q.weight torch.Size([640, 640])
model.diffusion_model.output_blocks.8.1.transformer_blocks.0.attn2.to_k.weight torch.Size([640, 768])
model.diffusion_model.output_blocks.8.1.transformer_blocks.0.attn2.to_v.weight torch.Size([640, 768])
model.diffusion_model.output_blocks.8.1.transformer_blocks.0.attn2.to_out.0.weight torch.Size([640, 640])
model.diffusion_model.output_blocks.8.1.transformer_blocks.0.attn2.to_out.0.bias torch.Size([640])
model.diffusion_model.output_blocks.8.1.transformer_blocks.0.norm1.weight torch.Size([640])
model.diffusion_model.output_blocks.8.1.transformer_blocks.0.norm1.bias torch.Size([640])
model.diffusion_model.output_blocks.8.1.transformer_blocks.0.norm2.weight torch.Size([640])
model.diffusion_model.output_blocks.8.1.transformer_blocks.0.norm2.bias torch.Size([640])
model.diffusion_model.output_blocks.8.1.transformer_blocks.0.norm3.weight torch.Size([640])
model.diffusion_model.output_blocks.8.1.transformer_blocks.0.norm3.bias torch.Size([640])
model.diffusion_model.output_blocks.8.1.proj_out.weight torch.Size([640, 640, 1, 1])
model.diffusion_model.output_blocks.8.1.proj_out.bias torch.Size([640])
model.diffusion_model.output_blocks.8.2.conv.weight torch.Size([640, 640, 3, 3])
model.diffusion_model.output_blocks.8.2.conv.bias torch.Size([640])
model.diffusion_model.output_blocks.9.0.in_layers.0.weight torch.Size([960])
model.diffusion_model.output_blocks.9.0.in_layers.0.bias torch.Size([960])
model.diffusion_model.output_blocks.9.0.in_layers.2.weight torch.Size([320, 960, 3, 3])
model.diffusion_model.output_blocks.9.0.in_layers.2.bias torch.Size([320])
model.diffusion_model.output_blocks.9.0.emb_layers.1.weight torch.Size([320, 1280])
model.diffusion_model.output_blocks.9.0.emb_layers.1.bias torch.Size([320])
model.diffusion_model.output_blocks.9.0.out_layers.0.weight torch.Size([320])
model.diffusion_model.output_blocks.9.0.out_layers.0.bias torch.Size([320])
model.diffusion_model.output_blocks.9.0.out_layers.3.weight torch.Size([320, 320, 3, 3])
model.diffusion_model.output_blocks.9.0.out_layers.3.bias torch.Size([320])
model.diffusion_model.output_blocks.9.0.skip_connection.weight torch.Size([320, 960, 1, 1])
model.diffusion_model.output_blocks.9.0.skip_connection.bias torch.Size([320])
model.diffusion_model.output_blocks.9.1.norm.weight torch.Size([320])
model.diffusion_model.output_blocks.9.1.norm.bias torch.Size([320])
model.diffusion_model.output_blocks.9.1.proj_in.weight torch.Size([320, 320, 1, 1])
model.diffusion_model.output_blocks.9.1.proj_in.bias torch.Size([320])
model.diffusion_model.output_blocks.9.1.transformer_blocks.0.attn1.to_q.weight torch.Size([320, 320])
model.diffusion_model.output_blocks.9.1.transformer_blocks.0.attn1.to_k.weight torch.Size([320, 320])
model.diffusion_model.output_blocks.9.1.transformer_blocks.0.attn1.to_v.weight torch.Size([320, 320])
model.diffusion_model.output_blocks.9.1.transformer_blocks.0.attn1.to_out.0.weight torch.Size([320, 320])
model.diffusion_model.output_blocks.9.1.transformer_blocks.0.attn1.to_out.0.bias torch.Size([320])
model.diffusion_model.output_blocks.9.1.transformer_blocks.0.ff.net.0.proj.weight torch.Size([2560, 320])
model.diffusion_model.output_blocks.9.1.transformer_blocks.0.ff.net.0.proj.bias torch.Size([2560])
model.diffusion_model.output_blocks.9.1.transformer_blocks.0.ff.net.2.weight torch.Size([320, 1280])
model.diffusion_model.output_blocks.9.1.transformer_blocks.0.ff.net.2.bias torch.Size([320])
model.diffusion_model.output_blocks.9.1.transformer_blocks.0.attn2.to_q.weight torch.Size([320, 320])
model.diffusion_model.output_blocks.9.1.transformer_blocks.0.attn2.to_k.weight torch.Size([320, 768])
model.diffusion_model.output_blocks.9.1.transformer_blocks.0.attn2.to_v.weight torch.Size([320, 768])
model.diffusion_model.output_blocks.9.1.transformer_blocks.0.attn2.to_out.0.weight torch.Size([320, 320])
model.diffusion_model.output_blocks.9.1.transformer_blocks.0.attn2.to_out.0.bias torch.Size([320])
model.diffusion_model.output_blocks.9.1.transformer_blocks.0.norm1.weight torch.Size([320])
model.diffusion_model.output_blocks.9.1.transformer_blocks.0.norm1.bias torch.Size([320])
model.diffusion_model.output_blocks.9.1.transformer_blocks.0.norm2.weight torch.Size([320])
model.diffusion_model.output_blocks.9.1.transformer_blocks.0.norm2.bias torch.Size([320])
model.diffusion_model.output_blocks.9.1.transformer_blocks.0.norm3.weight torch.Size([320])
model.diffusion_model.output_blocks.9.1.transformer_blocks.0.norm3.bias torch.Size([320])
model.diffusion_model.output_blocks.9.1.proj_out.weight torch.Size([320, 320, 1, 1])
model.diffusion_model.output_blocks.9.1.proj_out.bias torch.Size([320])
model.diffusion_model.output_blocks.10.0.in_layers.0.weight torch.Size([640])
model.diffusion_model.output_blocks.10.0.in_layers.0.bias torch.Size([640])
model.diffusion_model.output_blocks.10.0.in_layers.2.weight torch.Size([320, 640, 3, 3])
model.diffusion_model.output_blocks.10.0.in_layers.2.bias torch.Size([320])
model.diffusion_model.output_blocks.10.0.emb_layers.1.weight torch.Size([320, 1280])
model.diffusion_model.output_blocks.10.0.emb_layers.1.bias torch.Size([320])
model.diffusion_model.output_blocks.10.0.out_layers.0.weight torch.Size([320])
model.diffusion_model.output_blocks.10.0.out_layers.0.bias torch.Size([320])
model.diffusion_model.output_blocks.10.0.out_layers.3.weight torch.Size([320, 320, 3, 3])
model.diffusion_model.output_blocks.10.0.out_layers.3.bias torch.Size([320])
model.diffusion_model.output_blocks.10.0.skip_connection.weight torch.Size([320, 640, 1, 1])
model.diffusion_model.output_blocks.10.0.skip_connection.bias torch.Size([320])
model.diffusion_model.output_blocks.10.1.norm.weight torch.Size([320])
model.diffusion_model.output_blocks.10.1.norm.bias torch.Size([320])
model.diffusion_model.output_blocks.10.1.proj_in.weight torch.Size([320, 320, 1, 1])
model.diffusion_model.output_blocks.10.1.proj_in.bias torch.Size([320])
model.diffusion_model.output_blocks.10.1.transformer_blocks.0.attn1.to_q.weight torch.Size([320, 320])
model.diffusion_model.output_blocks.10.1.transformer_blocks.0.attn1.to_k.weight torch.Size([320, 320])
model.diffusion_model.output_blocks.10.1.transformer_blocks.0.attn1.to_v.weight torch.Size([320, 320])
model.diffusion_model.output_blocks.10.1.transformer_blocks.0.attn1.to_out.0.weight torch.Size([320, 320])
model.diffusion_model.output_blocks.10.1.transformer_blocks.0.attn1.to_out.0.bias torch.Size([320])
model.diffusion_model.output_blocks.10.1.transformer_blocks.0.ff.net.0.proj.weight torch.Size([2560, 320])
model.diffusion_model.output_blocks.10.1.transformer_blocks.0.ff.net.0.proj.bias torch.Size([2560])
model.diffusion_model.output_blocks.10.1.transformer_blocks.0.ff.net.2.weight torch.Size([320, 1280])
model.diffusion_model.output_blocks.10.1.transformer_blocks.0.ff.net.2.bias torch.Size([320])
model.diffusion_model.output_blocks.10.1.transformer_blocks.0.attn2.to_q.weight torch.Size([320, 320])
model.diffusion_model.output_blocks.10.1.transformer_blocks.0.attn2.to_k.weight torch.Size([320, 768])
model.diffusion_model.output_blocks.10.1.transformer_blocks.0.attn2.to_v.weight torch.Size([320, 768])
model.diffusion_model.output_blocks.10.1.transformer_blocks.0.attn2.to_out.0.weight torch.Size([320, 320])
model.diffusion_model.output_blocks.10.1.transformer_blocks.0.attn2.to_out.0.bias torch.Size([320])
model.diffusion_model.output_blocks.10.1.transformer_blocks.0.norm1.weight torch.Size([320])
model.diffusion_model.output_blocks.10.1.transformer_blocks.0.norm1.bias torch.Size([320])
model.diffusion_model.output_blocks.10.1.transformer_blocks.0.norm2.weight torch.Size([320])
model.diffusion_model.output_blocks.10.1.transformer_blocks.0.norm2.bias torch.Size([320])
model.diffusion_model.output_blocks.10.1.transformer_blocks.0.norm3.weight torch.Size([320])
model.diffusion_model.output_blocks.10.1.transformer_blocks.0.norm3.bias torch.Size([320])
model.diffusion_model.output_blocks.10.1.proj_out.weight torch.Size([320, 320, 1, 1])
model.diffusion_model.output_blocks.10.1.proj_out.bias torch.Size([320])
model.diffusion_model.output_blocks.11.0.in_layers.0.weight torch.Size([640])
model.diffusion_model.output_blocks.11.0.in_layers.0.bias torch.Size([640])
model.diffusion_model.output_blocks.11.0.in_layers.2.weight torch.Size([320, 640, 3, 3])
model.diffusion_model.output_blocks.11.0.in_layers.2.bias torch.Size([320])
model.diffusion_model.output_blocks.11.0.emb_layers.1.weight torch.Size([320, 1280])
model.diffusion_model.output_blocks.11.0.emb_layers.1.bias torch.Size([320])
model.diffusion_model.output_blocks.11.0.out_layers.0.weight torch.Size([320])
model.diffusion_model.output_blocks.11.0.out_layers.0.bias torch.Size([320])
model.diffusion_model.output_blocks.11.0.out_layers.3.weight torch.Size([320, 320, 3, 3])
model.diffusion_model.output_blocks.11.0.out_layers.3.bias torch.Size([320])
model.diffusion_model.output_blocks.11.0.skip_connection.weight torch.Size([320, 640, 1, 1])
model.diffusion_model.output_blocks.11.0.skip_connection.bias torch.Size([320])
model.diffusion_model.output_blocks.11.1.norm.weight torch.Size([320])
model.diffusion_model.output_blocks.11.1.norm.bias torch.Size([320])
model.diffusion_model.output_blocks.11.1.proj_in.weight torch.Size([320, 320, 1, 1])
model.diffusion_model.output_blocks.11.1.proj_in.bias torch.Size([320])
model.diffusion_model.output_blocks.11.1.transformer_blocks.0.attn1.to_q.weight torch.Size([320, 320])
model.diffusion_model.output_blocks.11.1.transformer_blocks.0.attn1.to_k.weight torch.Size([320, 320])
model.diffusion_model.output_blocks.11.1.transformer_blocks.0.attn1.to_v.weight torch.Size([320, 320])
model.diffusion_model.output_blocks.11.1.transformer_blocks.0.attn1.to_out.0.weight torch.Size([320, 320])
model.diffusion_model.output_blocks.11.1.transformer_blocks.0.attn1.to_out.0.bias torch.Size([320])
model.diffusion_model.output_blocks.11.1.transformer_blocks.0.ff.net.0.proj.weight torch.Size([2560, 320])
model.diffusion_model.output_blocks.11.1.transformer_blocks.0.ff.net.0.proj.bias torch.Size([2560])
model.diffusion_model.output_blocks.11.1.transformer_blocks.0.ff.net.2.weight torch.Size([320, 1280])
model.diffusion_model.output_blocks.11.1.transformer_blocks.0.ff.net.2.bias torch.Size([320])
model.diffusion_model.output_blocks.11.1.transformer_blocks.0.attn2.to_q.weight torch.Size([320, 320])
model.diffusion_model.output_blocks.11.1.transformer_blocks.0.attn2.to_k.weight torch.Size([320, 768])
model.diffusion_model.output_blocks.11.1.transformer_blocks.0.attn2.to_v.weight torch.Size([320, 768])
model.diffusion_model.output_blocks.11.1.transformer_blocks.0.attn2.to_out.0.weight torch.Size([320, 320])
model.diffusion_model.output_blocks.11.1.transformer_blocks.0.attn2.to_out.0.bias torch.Size([320])
model.diffusion_model.output_blocks.11.1.transformer_blocks.0.norm1.weight torch.Size([320])
model.diffusion_model.output_blocks.11.1.transformer_blocks.0.norm1.bias torch.Size([320])
model.diffusion_model.output_blocks.11.1.transformer_blocks.0.norm2.weight torch.Size([320])
model.diffusion_model.output_blocks.11.1.transformer_blocks.0.norm2.bias torch.Size([320])
model.diffusion_model.output_blocks.11.1.transformer_blocks.0.norm3.weight torch.Size([320])
model.diffusion_model.output_blocks.11.1.transformer_blocks.0.norm3.bias torch.Size([320])
model.diffusion_model.output_blocks.11.1.proj_out.weight torch.Size([320, 320, 1, 1])
model.diffusion_model.output_blocks.11.1.proj_out.bias torch.Size([320])
model.diffusion_model.out.0.weight torch.Size([320])
model.diffusion_model.out.0.bias torch.Size([320])
model.diffusion_model.out.2.weight torch.Size([4, 320, 3, 3])
model.diffusion_model.out.2.bias torch.Size([4])
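ここまでが model.diffusion_model.*(UNet)のキー一覧です。ブロック単位で重みを変えるようなマージを行う場合、こうしたキーをプレフィックスで input_blocks / middle_block / output_blocks などに振り分けることになります。なお、attn2.to_k / to_v の入力次元 768 は cond_stage_model(CLIP テキストエンコーダ)の隠れ次元と一致しており、クロスアテンションの条件付け入力に対応します。以下はキーのプレフィックスからブロックを機械的にグループ化する場合の最小限のスケッチです(正規表現・ラベル名・ファイル名はこちらで仮定したもので、特定のマージツールの実装ではありません)。

```python
# 最小限のスケッチ(仮定ベース): UNet のキーをブロック単位にグループ化する。
# 正規表現とラベル名は説明用の仮のもの。
import re
from collections import defaultdict

import torch


def group_unet_keys(state_dict):
    """model.diffusion_model.* のキーを {ブロック名: [キー, ...]} に振り分ける。"""
    groups = defaultdict(list)
    for key in state_dict:
        if not key.startswith("model.diffusion_model."):
            continue
        m = re.match(r"model\.diffusion_model\.(input_blocks|output_blocks)\.(\d+)\.", key)
        if m:
            groups[f"{m.group(1)}.{m.group(2)}"].append(key)
        elif key.startswith("model.diffusion_model.middle_block."):
            groups["middle_block"].append(key)
        else:
            groups["other"].append(key)  # time_embed や out.0 / out.2 など
    return groups


if __name__ == "__main__":
    # ファイル名は仮のもの
    sd = torch.load("model.ckpt", map_location="cpu")["state_dict"]
    for block, keys in sorted(group_unet_keys(sd).items()):
        print(block, len(keys), "tensors")
```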
first_stage_model.encoder.conv_in.weight torch.Size([128, 3, 3, 3])
first_stage_model.encoder.conv_in.bias torch.Size([128])
first_stage_model.encoder.down.0.block.0.norm1.weight torch.Size([128])
first_stage_model.encoder.down.0.block.0.norm1.bias torch.Size([128])
first_stage_model.encoder.down.0.block.0.conv1.weight torch.Size([128, 128, 3, 3])
first_stage_model.encoder.down.0.block.0.conv1.bias torch.Size([128])
first_stage_model.encoder.down.0.block.0.norm2.weight torch.Size([128])
first_stage_model.encoder.down.0.block.0.norm2.bias torch.Size([128])
first_stage_model.encoder.down.0.block.0.conv2.weight torch.Size([128, 128, 3, 3])
first_stage_model.encoder.down.0.block.0.conv2.bias torch.Size([128])
first_stage_model.encoder.down.0.block.1.norm1.weight torch.Size([128])
first_stage_model.encoder.down.0.block.1.norm1.bias torch.Size([128])
first_stage_model.encoder.down.0.block.1.conv1.weight torch.Size([128, 128, 3, 3])
first_stage_model.encoder.down.0.block.1.conv1.bias torch.Size([128])
first_stage_model.encoder.down.0.block.1.norm2.weight torch.Size([128])
first_stage_model.encoder.down.0.block.1.norm2.bias torch.Size([128])
first_stage_model.encoder.down.0.block.1.conv2.weight torch.Size([128, 128, 3, 3])
first_stage_model.encoder.down.0.block.1.conv2.bias torch.Size([128])
first_stage_model.encoder.down.0.downsample.conv.weight torch.Size([128, 128, 3, 3])
first_stage_model.encoder.down.0.downsample.conv.bias torch.Size([128])
first_stage_model.encoder.down.1.block.0.norm1.weight torch.Size([128])
first_stage_model.encoder.down.1.block.0.norm1.bias torch.Size([128])
first_stage_model.encoder.down.1.block.0.conv1.weight torch.Size([256, 128, 3, 3])
first_stage_model.encoder.down.1.block.0.conv1.bias torch.Size([256])
first_stage_model.encoder.down.1.block.0.norm2.weight torch.Size([256])
first_stage_model.encoder.down.1.block.0.norm2.bias torch.Size([256])
first_stage_model.encoder.down.1.block.0.conv2.weight torch.Size([256, 256, 3, 3])
first_stage_model.encoder.down.1.block.0.conv2.bias torch.Size([256])
first_stage_model.encoder.down.1.block.0.nin_shortcut.weight torch.Size([256, 128, 1, 1])
first_stage_model.encoder.down.1.block.0.nin_shortcut.bias torch.Size([256])
first_stage_model.encoder.down.1.block.1.norm1.weight torch.Size([256])
first_stage_model.encoder.down.1.block.1.norm1.bias torch.Size([256])
first_stage_model.encoder.down.1.block.1.conv1.weight torch.Size([256, 256, 3, 3])
first_stage_model.encoder.down.1.block.1.conv1.bias torch.Size([256])
first_stage_model.encoder.down.1.block.1.norm2.weight torch.Size([256])
first_stage_model.encoder.down.1.block.1.norm2.bias torch.Size([256])
first_stage_model.encoder.down.1.block.1.conv2.weight torch.Size([256, 256, 3, 3])
first_stage_model.encoder.down.1.block.1.conv2.bias torch.Size([256])
first_stage_model.encoder.down.1.downsample.conv.weight torch.Size([256, 256, 3, 3])
first_stage_model.encoder.down.1.downsample.conv.bias torch.Size([256])
first_stage_model.encoder.down.2.block.0.norm1.weight torch.Size([256])
first_stage_model.encoder.down.2.block.0.norm1.bias torch.Size([256])
first_stage_model.encoder.down.2.block.0.conv1.weight torch.Size([512, 256, 3, 3])
first_stage_model.encoder.down.2.block.0.conv1.bias torch.Size([512])
first_stage_model.encoder.down.2.block.0.norm2.weight torch.Size([512])
first_stage_model.encoder.down.2.block.0.norm2.bias torch.Size([512])
first_stage_model.encoder.down.2.block.0.conv2.weight torch.Size([512, 512, 3, 3])
first_stage_model.encoder.down.2.block.0.conv2.bias torch.Size([512])
first_stage_model.encoder.down.2.block.0.nin_shortcut.weight torch.Size([512, 256, 1, 1])
first_stage_model.encoder.down.2.block.0.nin_shortcut.bias torch.Size([512])
first_stage_model.encoder.down.2.block.1.norm1.weight torch.Size([512])
first_stage_model.encoder.down.2.block.1.norm1.bias torch.Size([512])
first_stage_model.encoder.down.2.block.1.conv1.weight torch.Size([512, 512, 3, 3])
first_stage_model.encoder.down.2.block.1.conv1.bias torch.Size([512])
first_stage_model.encoder.down.2.block.1.norm2.weight torch.Size([512])
first_stage_model.encoder.down.2.block.1.norm2.bias torch.Size([512])
first_stage_model.encoder.down.2.block.1.conv2.weight torch.Size([512, 512, 3, 3])
first_stage_model.encoder.down.2.block.1.conv2.bias torch.Size([512])
first_stage_model.encoder.down.2.downsample.conv.weight torch.Size([512, 512, 3, 3])
first_stage_model.encoder.down.2.downsample.conv.bias torch.Size([512])
first_stage_model.encoder.down.3.block.0.norm1.weight torch.Size([512])
first_stage_model.encoder.down.3.block.0.norm1.bias torch.Size([512])
first_stage_model.encoder.down.3.block.0.conv1.weight torch.Size([512, 512, 3, 3])
first_stage_model.encoder.down.3.block.0.conv1.bias torch.Size([512])
first_stage_model.encoder.down.3.block.0.norm2.weight torch.Size([512])
first_stage_model.encoder.down.3.block.0.norm2.bias torch.Size([512])
first_stage_model.encoder.down.3.block.0.conv2.weight torch.Size([512, 512, 3, 3])
first_stage_model.encoder.down.3.block.0.conv2.bias torch.Size([512])
first_stage_model.encoder.down.3.block.1.norm1.weight torch.Size([512])
first_stage_model.encoder.down.3.block.1.norm1.bias torch.Size([512])
first_stage_model.encoder.down.3.block.1.conv1.weight torch.Size([512, 512, 3, 3])
first_stage_model.encoder.down.3.block.1.conv1.bias torch.Size([512])
first_stage_model.encoder.down.3.block.1.norm2.weight torch.Size([512])
first_stage_model.encoder.down.3.block.1.norm2.bias torch.Size([512])
first_stage_model.encoder.down.3.block.1.conv2.weight torch.Size([512, 512, 3, 3])
first_stage_model.encoder.down.3.block.1.conv2.bias torch.Size([512])
first_stage_model.encoder.mid.block_1.norm1.weight torch.Size([512])
first_stage_model.encoder.mid.block_1.norm1.bias torch.Size([512])
first_stage_model.encoder.mid.block_1.conv1.weight torch.Size([512, 512, 3, 3])
first_stage_model.encoder.mid.block_1.conv1.bias torch.Size([512])
first_stage_model.encoder.mid.block_1.norm2.weight torch.Size([512])
first_stage_model.encoder.mid.block_1.norm2.bias torch.Size([512])
first_stage_model.encoder.mid.block_1.conv2.weight torch.Size([512, 512, 3, 3])
first_stage_model.encoder.mid.block_1.conv2.bias torch.Size([512])
first_stage_model.encoder.mid.attn_1.norm.weight torch.Size([512])
first_stage_model.encoder.mid.attn_1.norm.bias torch.Size([512])
first_stage_model.encoder.mid.attn_1.q.weight torch.Size([512, 512, 1, 1])
first_stage_model.encoder.mid.attn_1.q.bias torch.Size([512])
first_stage_model.encoder.mid.attn_1.k.weight torch.Size([512, 512, 1, 1])
first_stage_model.encoder.mid.attn_1.k.bias torch.Size([512])
first_stage_model.encoder.mid.attn_1.v.weight torch.Size([512, 512, 1, 1])
first_stage_model.encoder.mid.attn_1.v.bias torch.Size([512])
first_stage_model.encoder.mid.attn_1.proj_out.weight torch.Size([512, 512, 1, 1])
first_stage_model.encoder.mid.attn_1.proj_out.bias torch.Size([512])
first_stage_model.encoder.mid.block_2.norm1.weight torch.Size([512])
first_stage_model.encoder.mid.block_2.norm1.bias torch.Size([512])
first_stage_model.encoder.mid.block_2.conv1.weight torch.Size([512, 512, 3, 3])
first_stage_model.encoder.mid.block_2.conv1.bias torch.Size([512])
first_stage_model.encoder.mid.block_2.norm2.weight torch.Size([512])
first_stage_model.encoder.mid.block_2.norm2.bias torch.Size([512])
first_stage_model.encoder.mid.block_2.conv2.weight torch.Size([512, 512, 3, 3])
first_stage_model.encoder.mid.block_2.conv2.bias torch.Size([512])
first_stage_model.encoder.norm_out.weight torch.Size([512])
first_stage_model.encoder.norm_out.bias torch.Size([512])
first_stage_model.encoder.conv_out.weight torch.Size([8, 512, 3, 3])
first_stage_model.encoder.conv_out.bias torch.Size([8])
first_stage_model.decoder.conv_in.weight torch.Size([512, 4, 3, 3])
first_stage_model.decoder.conv_in.bias torch.Size([512])
first_stage_model.decoder.mid.block_1.norm1.weight torch.Size([512])
first_stage_model.decoder.mid.block_1.norm1.bias torch.Size([512])
first_stage_model.decoder.mid.block_1.conv1.weight torch.Size([512, 512, 3, 3])
first_stage_model.decoder.mid.block_1.conv1.bias torch.Size([512])
first_stage_model.decoder.mid.block_1.norm2.weight torch.Size([512])
first_stage_model.decoder.mid.block_1.norm2.bias torch.Size([512])
first_stage_model.decoder.mid.block_1.conv2.weight torch.Size([512, 512, 3, 3])
first_stage_model.decoder.mid.block_1.conv2.bias torch.Size([512])
first_stage_model.decoder.mid.attn_1.norm.weight torch.Size([512])
first_stage_model.decoder.mid.attn_1.norm.bias torch.Size([512])
first_stage_model.decoder.mid.attn_1.q.weight torch.Size([512, 512, 1, 1])
first_stage_model.decoder.mid.attn_1.q.bias torch.Size([512])
first_stage_model.decoder.mid.attn_1.k.weight torch.Size([512, 512, 1, 1])
first_stage_model.decoder.mid.attn_1.k.bias torch.Size([512])
first_stage_model.decoder.mid.attn_1.v.weight torch.Size([512, 512, 1, 1])
first_stage_model.decoder.mid.attn_1.v.bias torch.Size([512])
first_stage_model.decoder.mid.attn_1.proj_out.weight torch.Size([512, 512, 1, 1])
first_stage_model.decoder.mid.attn_1.proj_out.bias torch.Size([512])
first_stage_model.decoder.mid.block_2.norm1.weight torch.Size([512])
first_stage_model.decoder.mid.block_2.norm1.bias torch.Size([512])
first_stage_model.decoder.mid.block_2.conv1.weight torch.Size([512, 512, 3, 3])
first_stage_model.decoder.mid.block_2.conv1.bias torch.Size([512])
first_stage_model.decoder.mid.block_2.norm2.weight torch.Size([512])
first_stage_model.decoder.mid.block_2.norm2.bias torch.Size([512])
first_stage_model.decoder.mid.block_2.conv2.weight torch.Size([512, 512, 3, 3])
first_stage_model.decoder.mid.block_2.conv2.bias torch.Size([512])
first_stage_model.decoder.up.0.block.0.norm1.weight torch.Size([256])
first_stage_model.decoder.up.0.block.0.norm1.bias torch.Size([256])
first_stage_model.decoder.up.0.block.0.conv1.weight torch.Size([128, 256, 3, 3])
first_stage_model.decoder.up.0.block.0.conv1.bias torch.Size([128])
first_stage_model.decoder.up.0.block.0.norm2.weight torch.Size([128])
first_stage_model.decoder.up.0.block.0.norm2.bias torch.Size([128])
first_stage_model.decoder.up.0.block.0.conv2.weight torch.Size([128, 128, 3, 3])
first_stage_model.decoder.up.0.block.0.conv2.bias torch.Size([128])
first_stage_model.decoder.up.0.block.0.nin_shortcut.weight torch.Size([128, 256, 1, 1])
first_stage_model.decoder.up.0.block.0.nin_shortcut.bias torch.Size([128])
first_stage_model.decoder.up.0.block.1.norm1.weight torch.Size([128])
first_stage_model.decoder.up.0.block.1.norm1.bias torch.Size([128])
first_stage_model.decoder.up.0.block.1.conv1.weight torch.Size([128, 128, 3, 3])
first_stage_model.decoder.up.0.block.1.conv1.bias torch.Size([128])
first_stage_model.decoder.up.0.block.1.norm2.weight torch.Size([128])
first_stage_model.decoder.up.0.block.1.norm2.bias torch.Size([128])
first_stage_model.decoder.up.0.block.1.conv2.weight torch.Size([128, 128, 3, 3])
first_stage_model.decoder.up.0.block.1.conv2.bias torch.Size([128])
first_stage_model.decoder.up.0.block.2.norm1.weight torch.Size([128])
first_stage_model.decoder.up.0.block.2.norm1.bias torch.Size([128])
first_stage_model.decoder.up.0.block.2.conv1.weight torch.Size([128, 128, 3, 3])
first_stage_model.decoder.up.0.block.2.conv1.bias torch.Size([128])
first_stage_model.decoder.up.0.block.2.norm2.weight torch.Size([128])
first_stage_model.decoder.up.0.block.2.norm2.bias torch.Size([128])
first_stage_model.decoder.up.0.block.2.conv2.weight torch.Size([128, 128, 3, 3])
first_stage_model.decoder.up.0.block.2.conv2.bias torch.Size([128])
first_stage_model.decoder.up.1.block.0.norm1.weight torch.Size([512])
first_stage_model.decoder.up.1.block.0.norm1.bias torch.Size([512])
first_stage_model.decoder.up.1.block.0.conv1.weight torch.Size([256, 512, 3, 3])
first_stage_model.decoder.up.1.block.0.conv1.bias torch.Size([256])
first_stage_model.decoder.up.1.block.0.norm2.weight torch.Size([256])
first_stage_model.decoder.up.1.block.0.norm2.bias torch.Size([256])
first_stage_model.decoder.up.1.block.0.conv2.weight torch.Size([256, 256, 3, 3])
first_stage_model.decoder.up.1.block.0.conv2.bias torch.Size([256])
first_stage_model.decoder.up.1.block.0.nin_shortcut.weight torch.Size([256, 512, 1, 1])
first_stage_model.decoder.up.1.block.0.nin_shortcut.bias torch.Size([256])
first_stage_model.decoder.up.1.block.1.norm1.weight torch.Size([256])
first_stage_model.decoder.up.1.block.1.norm1.bias torch.Size([256])
first_stage_model.decoder.up.1.block.1.conv1.weight torch.Size([256, 256, 3, 3])
first_stage_model.decoder.up.1.block.1.conv1.bias torch.Size([256])
first_stage_model.decoder.up.1.block.1.norm2.weight torch.Size([256])
first_stage_model.decoder.up.1.block.1.norm2.bias torch.Size([256])
first_stage_model.decoder.up.1.block.1.conv2.weight torch.Size([256, 256, 3, 3])
first_stage_model.decoder.up.1.block.1.conv2.bias torch.Size([256])
first_stage_model.decoder.up.1.block.2.norm1.weight torch.Size([256])
first_stage_model.decoder.up.1.block.2.norm1.bias torch.Size([256])
first_stage_model.decoder.up.1.block.2.conv1.weight torch.Size([256, 256, 3, 3])
first_stage_model.decoder.up.1.block.2.conv1.bias torch.Size([256])
first_stage_model.decoder.up.1.block.2.norm2.weight torch.Size([256])
first_stage_model.decoder.up.1.block.2.norm2.bias torch.Size([256])
first_stage_model.decoder.up.1.block.2.conv2.weight torch.Size([256, 256, 3, 3])
first_stage_model.decoder.up.1.block.2.conv2.bias torch.Size([256])
first_stage_model.decoder.up.1.upsample.conv.weight torch.Size([256, 256, 3, 3])
first_stage_model.decoder.up.1.upsample.conv.bias torch.Size([256])
first_stage_model.decoder.up.2.block.0.norm1.weight torch.Size([512])
first_stage_model.decoder.up.2.block.0.norm1.bias torch.Size([512])
first_stage_model.decoder.up.2.block.0.conv1.weight torch.Size([512, 512, 3, 3])
first_stage_model.decoder.up.2.block.0.conv1.bias torch.Size([512])
first_stage_model.decoder.up.2.block.0.norm2.weight torch.Size([512])
first_stage_model.decoder.up.2.block.0.norm2.bias torch.Size([512])
first_stage_model.decoder.up.2.block.0.conv2.weight torch.Size([512, 512, 3, 3])
first_stage_model.decoder.up.2.block.0.conv2.bias torch.Size([512])
first_stage_model.decoder.up.2.block.1.norm1.weight torch.Size([512])
first_stage_model.decoder.up.2.block.1.norm1.bias torch.Size([512])
first_stage_model.decoder.up.2.block.1.conv1.weight torch.Size([512, 512, 3, 3])
first_stage_model.decoder.up.2.block.1.conv1.bias torch.Size([512])
first_stage_model.decoder.up.2.block.1.norm2.weight torch.Size([512])
first_stage_model.decoder.up.2.block.1.norm2.bias torch.Size([512])
first_stage_model.decoder.up.2.block.1.conv2.weight torch.Size([512, 512, 3, 3])
first_stage_model.decoder.up.2.block.1.conv2.bias torch.Size([512])
first_stage_model.decoder.up.2.block.2.norm1.weight torch.Size([512])
first_stage_model.decoder.up.2.block.2.norm1.bias torch.Size([512])
first_stage_model.decoder.up.2.block.2.conv1.weight torch.Size([512, 512, 3, 3])
first_stage_model.decoder.up.2.block.2.conv1.bias torch.Size([512])
first_stage_model.decoder.up.2.block.2.norm2.weight torch.Size([512])
first_stage_model.decoder.up.2.block.2.norm2.bias torch.Size([512])
first_stage_model.decoder.up.2.block.2.conv2.weight torch.Size([512, 512, 3, 3])
first_stage_model.decoder.up.2.block.2.conv2.bias torch.Size([512])
first_stage_model.decoder.up.2.upsample.conv.weight torch.Size([512, 512, 3, 3])
first_stage_model.decoder.up.2.upsample.conv.bias torch.Size([512])
first_stage_model.decoder.up.3.block.0.norm1.weight torch.Size([512])
first_stage_model.decoder.up.3.block.0.norm1.bias torch.Size([512])
first_stage_model.decoder.up.3.block.0.conv1.weight torch.Size([512, 512, 3, 3])
first_stage_model.decoder.up.3.block.0.conv1.bias torch.Size([512])
first_stage_model.decoder.up.3.block.0.norm2.weight torch.Size([512])
first_stage_model.decoder.up.3.block.0.norm2.bias torch.Size([512])
first_stage_model.decoder.up.3.block.0.conv2.weight torch.Size([512, 512, 3, 3])
first_stage_model.decoder.up.3.block.0.conv2.bias torch.Size([512])
first_stage_model.decoder.up.3.block.1.norm1.weight torch.Size([512])
first_stage_model.decoder.up.3.block.1.norm1.bias torch.Size([512])
first_stage_model.decoder.up.3.block.1.conv1.weight torch.Size([512, 512, 3, 3])
first_stage_model.decoder.up.3.block.1.conv1.bias torch.Size([512])
first_stage_model.decoder.up.3.block.1.norm2.weight torch.Size([512])
first_stage_model.decoder.up.3.block.1.norm2.bias torch.Size([512])
first_stage_model.decoder.up.3.block.1.conv2.weight torch.Size([512, 512, 3, 3])
first_stage_model.decoder.up.3.block.1.conv2.bias torch.Size([512])
first_stage_model.decoder.up.3.block.2.norm1.weight torch.Size([512])
first_stage_model.decoder.up.3.block.2.norm1.bias torch.Size([512])
first_stage_model.decoder.up.3.block.2.conv1.weight torch.Size([512, 512, 3, 3])
first_stage_model.decoder.up.3.block.2.conv1.bias torch.Size([512])
first_stage_model.decoder.up.3.block.2.norm2.weight torch.Size([512])
first_stage_model.decoder.up.3.block.2.norm2.bias torch.Size([512])
first_stage_model.decoder.up.3.block.2.conv2.weight torch.Size([512, 512, 3, 3])
first_stage_model.decoder.up.3.block.2.conv2.bias torch.Size([512])
first_stage_model.decoder.up.3.upsample.conv.weight torch.Size([512, 512, 3, 3])
first_stage_model.decoder.up.3.upsample.conv.bias torch.Size([512])
first_stage_model.decoder.norm_out.weight torch.Size([128])
first_stage_model.decoder.norm_out.bias torch.Size([128])
first_stage_model.decoder.conv_out.weight torch.Size([3, 128, 3, 3])
first_stage_model.decoder.conv_out.bias torch.Size([3])
first_stage_model.quant_conv.weight torch.Size([8, 8, 1, 1])
first_stage_model.quant_conv.bias torch.Size([8])
first_stage_model.post_quant_conv.weight torch.Size([4, 4, 1, 1])
first_stage_model.post_quant_conv.bias torch.Size([4])
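ここまでが first_stage_model.*(VAE)のキー一覧です。2つのモデルをキー単位でマージする前提として、両者のキー集合とテンソル形状が一致していることを先に確認しておくと安全です。以下はその確認だけを行う最小限のスケッチです(ファイル名は仮のものです)。

```python
# 最小限のスケッチ(仮定ベース): マージ前に 2 つのチェックポイントのキーと形状の一致を確認する。
import torch


def check_mergeable(path_a, path_b):
    sd_a = torch.load(path_a, map_location="cpu")["state_dict"]
    sd_b = torch.load(path_b, map_location="cpu")["state_dict"]
    missing = set(sd_a) ^ set(sd_b)  # 片方にしか存在しないキー
    mismatched = [k for k in set(sd_a) & set(sd_b)
                  if sd_a[k].shape != sd_b[k].shape]
    print("keys only in one model:", len(missing))
    print("shape mismatches:", len(mismatched))
    return not missing and not mismatched


if __name__ == "__main__":
    # ファイル名は仮のもの
    print("mergeable:", check_mergeable("modelA.ckpt", "modelB.ckpt"))
```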
cond_stage_model.transformer.text_model.embeddings.token_embedding.weight torch.Size([49408, 768])
cond_stage_model.transformer.text_model.embeddings.position_embedding.weight torch.Size([77, 768])
cond_stage_model.transformer.text_model.encoder.layers.0.self_attn.k_proj.weight torch.Size([768, 768])
cond_stage_model.transformer.text_model.encoder.layers.0.self_attn.k_proj.bias torch.Size([768])
cond_stage_model.transformer.text_model.encoder.layers.0.self_attn.v_proj.weight torch.Size([768, 768])
cond_stage_model.transformer.text_model.encoder.layers.0.self_attn.v_proj.bias torch.Size([768])
cond_stage_model.transformer.text_model.encoder.layers.0.self_attn.q_proj.weight torch.Size([768, 768])
cond_stage_model.transformer.text_model.encoder.layers.0.self_attn.q_proj.bias torch.Size([768])
cond_stage_model.transformer.text_model.encoder.layers.0.self_attn.out_proj.weight torch.Size([768, 768])
cond_stage_model.transformer.text_model.encoder.layers.0.self_attn.out_proj.bias torch.Size([768])
cond_stage_model.transformer.text_model.encoder.layers.0.layer_norm1.weight torch.Size([768])
cond_stage_model.transformer.text_model.encoder.layers.0.layer_norm1.bias torch.Size([768])
cond_stage_model.transformer.text_model.encoder.layers.0.mlp.fc1.weight torch.Size([3072, 768])
cond_stage_model.transformer.text_model.encoder.layers.0.mlp.fc1.bias torch.Size([3072])
cond_stage_model.transformer.text_model.encoder.layers.0.mlp.fc2.weight torch.Size([768, 3072])
cond_stage_model.transformer.text_model.encoder.layers.0.mlp.fc2.bias torch.Size([768])
cond_stage_model.transformer.text_model.encoder.layers.0.layer_norm2.weight torch.Size([768])
cond_stage_model.transformer.text_model.encoder.layers.0.layer_norm2.bias torch.Size([768])
cond_stage_model.transformer.text_model.encoder.layers.1.self_attn.k_proj.weight torch.Size([768, 768])
cond_stage_model.transformer.text_model.encoder.layers.1.self_attn.k_proj.bias torch.Size([768])
cond_stage_model.transformer.text_model.encoder.layers.1.self_attn.v_proj.weight torch.Size([768, 768])
cond_stage_model.transformer.text_model.encoder.layers.1.self_attn.v_proj.bias torch.Size([768])
cond_stage_model.transformer.text_model.encoder.layers.1.self_attn.q_proj.weight torch.Size([768, 768])
cond_stage_model.transformer.text_model.encoder.layers.1.self_attn.q_proj.bias torch.Size([768])
cond_stage_model.transformer.text_model.encoder.layers.1.self_attn.out_proj.weight torch.Size([768, 768])
cond_stage_model.transformer.text_model.encoder.layers.1.self_attn.out_proj.bias torch.Size([768])
cond_stage_model.transformer.text_model.encoder.layers.1.layer_norm1.weight torch.Size([768])
cond_stage_model.transformer.text_model.encoder.layers.1.layer_norm1.bias torch.Size([768])
cond_stage_model.transformer.text_model.encoder.layers.1.mlp.fc1.weight torch.Size([3072, 768])
cond_stage_model.transformer.text_model.encoder.layers.1.mlp.fc1.bias torch.Size([3072])
cond_stage_model.transformer.text_model.encoder.layers.1.mlp.fc2.weight torch.Size([768, 3072])
cond_stage_model.transformer.text_model.encoder.layers.1.mlp.fc2.bias torch.Size([768])
cond_stage_model.transformer.text_model.encoder.layers.1.layer_norm2.weight torch.Size([768])
cond_stage_model.transformer.text_model.encoder.layers.1.layer_norm2.bias torch.Size([768])
cond_stage_model.transformer.text_model.encoder.layers.2.self_attn.k_proj.weight torch.Size([768, 768])
cond_stage_model.transformer.text_model.encoder.layers.2.self_attn.k_proj.bias torch.Size([768])
cond_stage_model.transformer.text_model.encoder.layers.2.self_attn.v_proj.weight torch.Size([768, 768])
cond_stage_model.transformer.text_model.encoder.layers.2.self_attn.v_proj.bias torch.Size([768])
cond_stage_model.transformer.text_model.encoder.layers.2.self_attn.q_proj.weight torch.Size([768, 768])
cond_stage_model.transformer.text_model.encoder.layers.2.self_attn.q_proj.bias torch.Size([768])
cond_stage_model.transformer.text_model.encoder.layers.2.self_attn.out_proj.weight torch.Size([768, 768])
cond_stage_model.transformer.text_model.encoder.layers.2.self_attn.out_proj.bias torch.Size([768])
cond_stage_model.transformer.text_model.encoder.layers.2.layer_norm1.weight torch.Size([768])
cond_stage_model.transformer.text_model.encoder.layers.2.layer_norm1.bias torch.Size([768])
cond_stage_model.transformer.text_model.encoder.layers.2.mlp.fc1.weight torch.Size([3072, 768])
cond_stage_model.transformer.text_model.encoder.layers.2.mlp.fc1.bias torch.Size([3072])
cond_stage_model.transformer.text_model.encoder.layers.2.mlp.fc2.weight torch.Size([768, 3072])
cond_stage_model.transformer.text_model.encoder.layers.2.mlp.fc2.bias torch.Size([768])
cond_stage_model.transformer.text_model.encoder.layers.2.layer_norm2.weight torch.Size([768])
cond_stage_model.transformer.text_model.encoder.layers.2.layer_norm2.bias torch.Size([768])
cond_stage_model.transformer.text_model.encoder.layers.3.self_attn.k_proj.weight torch.Size([768, 768])
cond_stage_model.transformer.text_model.encoder.layers.3.self_attn.k_proj.bias torch.Size([768])
cond_stage_model.transformer.text_model.encoder.layers.3.self_attn.v_proj.weight torch.Size([768, 768])
cond_stage_model.transformer.text_model.encoder.layers.3.self_attn.v_proj.bias torch.Size([768])
cond_stage_model.transformer.text_model.encoder.layers.3.self_attn.q_proj.weight torch.Size([768, 768])
cond_stage_model.transformer.text_model.encoder.layers.3.self_attn.q_proj.bias torch.Size([768])
cond_stage_model.transformer.text_model.encoder.layers.3.self_attn.out_proj.weight torch.Size([768, 768])
cond_stage_model.transformer.text_model.encoder.layers.3.self_attn.out_proj.bias torch.Size([768])
cond_stage_model.transformer.text_model.encoder.layers.3.layer_norm1.weight torch.Size([768])
cond_stage_model.transformer.text_model.encoder.layers.3.layer_norm1.bias torch.Size([768])
cond_stage_model.transformer.text_model.encoder.layers.3.mlp.fc1.weight torch.Size([3072, 768])
cond_stage_model.transformer.text_model.encoder.layers.3.mlp.fc1.bias torch.Size([3072])
cond_stage_model.transformer.text_model.encoder.layers.3.mlp.fc2.weight torch.Size([768, 3072])
cond_stage_model.transformer.text_model.encoder.layers.3.mlp.fc2.bias torch.Size([768])
cond_stage_model.transformer.text_model.encoder.layers.3.layer_norm2.weight torch.Size([768])
cond_stage_model.transformer.text_model.encoder.layers.3.layer_norm2.bias torch.Size([768])
cond_stage_model.transformer.text_model.encoder.layers.4.self_attn.k_proj.weight torch.Size([768, 768])
cond_stage_model.transformer.text_model.encoder.layers.4.self_attn.k_proj.bias torch.Size([768])
cond_stage_model.transformer.text_model.encoder.layers.4.self_attn.v_proj.weight torch.Size([768, 768])
cond_stage_model.transformer.text_model.encoder.layers.4.self_attn.v_proj.bias torch.Size([768])
cond_stage_model.transformer.text_model.encoder.layers.4.self_attn.q_proj.weight torch.Size([768, 768])
cond_stage_model.transformer.text_model.encoder.layers.4.self_attn.q_proj.bias torch.Size([768])
cond_stage_model.transformer.text_model.encoder.layers.4.self_attn.out_proj.weight torch.Size([768, 768])
cond_stage_model.transformer.text_model.encoder.layers.4.self_attn.out_proj.bias torch.Size([768])
cond_stage_model.transformer.text_model.encoder.layers.4.layer_norm1.weight torch.Size([768])
cond_stage_model.transformer.text_model.encoder.layers.4.layer_norm1.bias torch.Size([768])
cond_stage_model.transformer.text_model.encoder.layers.4.mlp.fc1.weight torch.Size([3072, 768])
cond_stage_model.transformer.text_model.encoder.layers.4.mlp.fc1.bias torch.Size([3072])
cond_stage_model.transformer.text_model.encoder.layers.4.mlp.fc2.weight torch.Size([768, 3072])
cond_stage_model.transformer.text_model.encoder.layers.4.mlp.fc2.bias torch.Size([768])
cond_stage_model.transformer.text_model.encoder.layers.4.layer_norm2.weight torch.Size([768])
cond_stage_model.transformer.text_model.encoder.layers.4.layer_norm2.bias torch.Size([768])
cond_stage_model.transformer.text_model.encoder.layers.5.self_attn.k_proj.weight torch.Size([768, 768])
cond_stage_model.transformer.text_model.encoder.layers.5.self_attn.k_proj.bias torch.Size([768])
cond_stage_model.transformer.text_model.encoder.layers.5.self_attn.v_proj.weight torch.Size([768, 768])
cond_stage_model.transformer.text_model.encoder.layers.5.self_attn.v_proj.bias torch.Size([768])
cond_stage_model.transformer.text_model.encoder.layers.5.self_attn.q_proj.weight torch.Size([768, 768])
cond_stage_model.transformer.text_model.encoder.layers.5.self_attn.q_proj.bias torch.Size([768])
cond_stage_model.transformer.text_model.encoder.layers.5.self_attn.out_proj.weight torch.Size([768, 768])
cond_stage_model.transformer.text_model.encoder.layers.5.self_attn.out_proj.bias torch.Size([768])
cond_stage_model.transformer.text_model.encoder.layers.5.layer_norm1.weight torch.Size([768])
cond_stage_model.transformer.text_model.encoder.layers.5.layer_norm1.bias torch.Size([768])
cond_stage_model.transformer.text_model.encoder.layers.5.mlp.fc1.weight torch.Size([3072, 768])
cond_stage_model.transformer.text_model.encoder.layers.5.mlp.fc1.bias torch.Size([3072])
cond_stage_model.transformer.text_model.encoder.layers.5.mlp.fc2.weight torch.Size([768, 3072])
cond_stage_model.transformer.text_model.encoder.layers.5.mlp.fc2.bias torch.Size([768])
cond_stage_model.transformer.text_model.encoder.layers.5.layer_norm2.weight torch.Size([768])
cond_stage_model.transformer.text_model.encoder.layers.5.layer_norm2.bias torch.Size([768])
cond_stage_model.transformer.text_model.encoder.layers.6.self_attn.k_proj.weight torch.Size([768, 768])
cond_stage_model.transformer.text_model.encoder.layers.6.self_attn.k_proj.bias torch.Size([768])
cond_stage_model.transformer.text_model.encoder.layers.6.self_attn.v_proj.weight torch.Size([768, 768])
cond_stage_model.transformer.text_model.encoder.layers.6.self_attn.v_proj.bias torch.Size([768])
cond_stage_model.transformer.text_model.encoder.layers.6.self_attn.q_proj.weight torch.Size([768, 768])
cond_stage_model.transformer.text_model.encoder.layers.6.self_attn.q_proj.bias torch.Size([768])
cond_stage_model.transformer.text_model.encoder.layers.6.self_attn.out_proj.weight torch.Size([768, 768])
cond_stage_model.transformer.text_model.encoder.layers.6.self_attn.out_proj.bias torch.Size([768])
cond_stage_model.transformer.text_model.encoder.layers.6.layer_norm1.weight torch.Size([768])
cond_stage_model.transformer.text_model.encoder.layers.6.layer_norm1.bias torch.Size([768])
cond_stage_model.transformer.text_model.encoder.layers.6.mlp.fc1.weight torch.Size([3072, 768])
cond_stage_model.transformer.text_model.encoder.layers.6.mlp.fc1.bias torch.Size([3072])
cond_stage_model.transformer.text_model.encoder.layers.6.mlp.fc2.weight torch.Size([768, 3072])
cond_stage_model.transformer.text_model.encoder.layers.6.mlp.fc2.bias torch.Size([768])
cond_stage_model.transformer.text_model.encoder.layers.6.layer_norm2.weight torch.Size([768])
cond_stage_model.transformer.text_model.encoder.layers.6.layer_norm2.bias torch.Size([768])
cond_stage_model.transformer.text_model.encoder.layers.7.self_attn.k_proj.weight torch.Size([768, 768])
cond_stage_model.transformer.text_model.encoder.layers.7.self_attn.k_proj.bias torch.Size([768])
cond_stage_model.transformer.text_model.encoder.layers.7.self_attn.v_proj.weight torch.Size([768, 768])
cond_stage_model.transformer.text_model.encoder.layers.7.self_attn.v_proj.bias torch.Size([768])
cond_stage_model.transformer.text_model.encoder.layers.7.self_attn.q_proj.weight torch.Size([768, 768])
cond_stage_model.transformer.text_model.encoder.layers.7.self_attn.q_proj.bias torch.Size([768])
cond_stage_model.transformer.text_model.encoder.layers.7.self_attn.out_proj.weight torch.Size([768, 768])
cond_stage_model.transformer.text_model.encoder.layers.7.self_attn.out_proj.bias torch.Size([768])
cond_stage_model.transformer.text_model.encoder.layers.7.layer_norm1.weight torch.Size([768])
cond_stage_model.transformer.text_model.encoder.layers.7.layer_norm1.bias torch.Size([768])
cond_stage_model.transformer.text_model.encoder.layers.7.mlp.fc1.weight torch.Size([3072, 768])
cond_stage_model.transformer.text_model.encoder.layers.7.mlp.fc1.bias torch.Size([3072])
cond_stage_model.transformer.text_model.encoder.layers.7.mlp.fc2.weight torch.Size([768, 3072])
cond_stage_model.transformer.text_model.encoder.layers.7.mlp.fc2.bias torch.Size([768])
cond_stage_model.transformer.text_model.encoder.layers.7.layer_norm2.weight torch.Size([768])
cond_stage_model.transformer.text_model.encoder.layers.7.layer_norm2.bias torch.Size([768])
cond_stage_model.transformer.text_model.encoder.layers.8.self_attn.k_proj.weight torch.Size([768, 768])
cond_stage_model.transformer.text_model.encoder.layers.8.self_attn.k_proj.bias torch.Size([768])
cond_stage_model.transformer.text_model.encoder.layers.8.self_attn.v_proj.weight torch.Size([768, 768])
cond_stage_model.transformer.text_model.encoder.layers.8.self_attn.v_proj.bias torch.Size([768])
cond_stage_model.transformer.text_model.encoder.layers.8.self_attn.q_proj.weight torch.Size([768, 768])
cond_stage_model.transformer.text_model.encoder.layers.8.self_attn.q_proj.bias torch.Size([768])
cond_stage_model.transformer.text_model.encoder.layers.8.self_attn.out_proj.weight torch.Size([768, 768])
cond_stage_model.transformer.text_model.encoder.layers.8.self_attn.out_proj.bias torch.Size([768])
cond_stage_model.transformer.text_model.encoder.layers.8.layer_norm1.weight torch.Size([768])
cond_stage_model.transformer.text_model.encoder.layers.8.layer_norm1.bias torch.Size([768])
cond_stage_model.transformer.text_model.encoder.layers.8.mlp.fc1.weight torch.Size([3072, 768])
cond_stage_model.transformer.text_model.encoder.layers.8.mlp.fc1.bias torch.Size([3072])
cond_stage_model.transformer.text_model.encoder.layers.8.mlp.fc2.weight torch.Size([768, 3072])
cond_stage_model.transformer.text_model.encoder.layers.8.mlp.fc2.bias torch.Size([768])
cond_stage_model.transformer.text_model.encoder.layers.8.layer_norm2.weight torch.Size([768])
cond_stage_model.transformer.text_model.encoder.layers.8.layer_norm2.bias torch.Size([768])
cond_stage_model.transformer.text_model.encoder.layers.9.self_attn.k_proj.weight torch.Size([768, 768])
cond_stage_model.transformer.text_model.encoder.layers.9.self_attn.k_proj.bias torch.Size([768])
cond_stage_model.transformer.text_model.encoder.layers.9.self_attn.v_proj.weight torch.Size([768, 768])
cond_stage_model.transformer.text_model.encoder.layers.9.self_attn.v_proj.bias torch.Size([768])
cond_stage_model.transformer.text_model.encoder.layers.9.self_attn.q_proj.weight torch.Size([768, 768])
cond_stage_model.transformer.text_model.encoder.layers.9.self_attn.q_proj.bias torch.Size([768])
cond_stage_model.transformer.text_model.encoder.layers.9.self_attn.out_proj.weight torch.Size([768, 768])
cond_stage_model.transformer.text_model.encoder.layers.9.self_attn.out_proj.bias torch.Size([768])
cond_stage_model.transformer.text_model.encoder.layers.9.layer_norm1.weight torch.Size([768])
cond_stage_model.transformer.text_model.encoder.layers.9.layer_norm1.bias torch.Size([768])
cond_stage_model.transformer.text_model.encoder.layers.9.mlp.fc1.weight torch.Size([3072, 768])
cond_stage_model.transformer.text_model.encoder.layers.9.mlp.fc1.bias torch.Size([3072])
cond_stage_model.transformer.text_model.encoder.layers.9.mlp.fc2.weight torch.Size([768, 3072])
cond_stage_model.transformer.text_model.encoder.layers.9.mlp.fc2.bias torch.Size([768])
cond_stage_model.transformer.text_model.encoder.layers.9.layer_norm2.weight torch.Size([768])
cond_stage_model.transformer.text_model.encoder.layers.9.layer_norm2.bias torch.Size([768])
cond_stage_model.transformer.text_model.encoder.layers.10.self_attn.k_proj.weight torch.Size([768, 768])
cond_stage_model.transformer.text_model.encoder.layers.10.self_attn.k_proj.bias torch.Size([768])
cond_stage_model.transformer.text_model.encoder.layers.10.self_attn.v_proj.weight torch.Size([768, 768])
cond_stage_model.transformer.text_model.encoder.layers.10.self_attn.v_proj.bias torch.Size([768])
cond_stage_model.transformer.text_model.encoder.layers.10.self_attn.q_proj.weight torch.Size([768, 768])
cond_stage_model.transformer.text_model.encoder.layers.10.self_attn.q_proj.bias torch.Size([768])
cond_stage_model.transformer.text_model.encoder.layers.10.self_attn.out_proj.weight torch.Size([768, 768])
cond_stage_model.transformer.text_model.encoder.layers.10.self_attn.out_proj.bias torch.Size([768])
cond_stage_model.transformer.text_model.encoder.layers.10.layer_norm1.weight torch.Size([768])
cond_stage_model.transformer.text_model.encoder.layers.10.layer_norm1.bias torch.Size([768])
cond_stage_model.transformer.text_model.encoder.layers.10.mlp.fc1.weight torch.Size([3072, 768])
cond_stage_model.transformer.text_model.encoder.layers.10.mlp.fc1.bias torch.Size([3072])
cond_stage_model.transformer.text_model.encoder.layers.10.mlp.fc2.weight torch.Size([768, 3072])
cond_stage_model.transformer.text_model.encoder.layers.10.mlp.fc2.bias torch.Size([768])
cond_stage_model.transformer.text_model.encoder.layers.10.layer_norm2.weight torch.Size([768])
cond_stage_model.transformer.text_model.encoder.layers.10.layer_norm2.bias torch.Size([768])
cond_stage_model.transformer.text_model.encoder.layers.11.self_attn.k_proj.weight torch.Size([768, 768])
cond_stage_model.transformer.text_model.encoder.layers.11.self_attn.k_proj.bias torch.Size([768])
cond_stage_model.transformer.text_model.encoder.layers.11.self_attn.v_proj.weight torch.Size([768, 768])
cond_stage_model.transformer.text_model.encoder.layers.11.self_attn.v_proj.bias torch.Size([768])
cond_stage_model.transformer.text_model.encoder.layers.11.self_attn.q_proj.weight torch.Size([768, 768])
cond_stage_model.transformer.text_model.encoder.layers.11.self_attn.q_proj.bias torch.Size([768])
cond_stage_model.transformer.text_model.encoder.layers.11.self_attn.out_proj.weight torch.Size([768, 768])
cond_stage_model.transformer.text_model.encoder.layers.11.self_attn.out_proj.bias torch.Size([768])
cond_stage_model.transformer.text_model.encoder.layers.11.layer_norm1.weight torch.Size([768])
cond_stage_model.transformer.text_model.encoder.layers.11.layer_norm1.bias torch.Size([768])
cond_stage_model.transformer.text_model.encoder.layers.11.mlp.fc1.weight torch.Size([3072, 768])
cond_stage_model.transformer.text_model.encoder.layers.11.mlp.fc1.bias torch.Size([3072])
cond_stage_model.transformer.text_model.encoder.layers.11.mlp.fc2.weight torch.Size([768, 3072])
cond_stage_model.transformer.text_model.encoder.layers.11.mlp.fc2.bias torch.Size([768])
cond_stage_model.transformer.text_model.encoder.layers.11.layer_norm2.weight torch.Size([768])
cond_stage_model.transformer.text_model.encoder.layers.11.layer_norm2.bias torch.Size([768])
cond_stage_model.transformer.text_model.final_layer_norm.weight torch.Size([768])
cond_stage_model.transformer.text_model.final_layer_norm.bias torch.Size([768])
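以上のキーと torch.Size の一覧は、SD v1.x 系チェックポイントの state_dict を読み込み、各テンソルの名前とサイズを出力すれば再現できるはずです。以下はその最小限のスケッチです(ファイル名は仮のもので、safetensors 形式の場合は safetensors ライブラリで読み込む点が異なります。EMA 重みなどを含むチェックポイントでは追加のキーが表示されることもあります)。

```python
# 最小限のスケッチ: チェックポイントの state_dict からキー名とテンソルサイズを列挙する。
# ファイル名は仮のもの。
import torch

state_dict = torch.load("sd-v1-5.ckpt", map_location="cpu")["state_dict"]
for name, tensor in state_dict.items():
    if torch.is_tensor(tensor):      # 万一テンソル以外のエントリがあれば読み飛ばす
        print(name, tensor.size())   # 例: model.diffusion_model.out.2.bias torch.Size([4])
```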