Washakh committed on
Commit 6b20050 · verified · 1 Parent(s): 14f70d7

Upload folder using huggingface_hub

.gitattributes CHANGED
@@ -33,3 +33,10 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ gguf/base-v7-Q4_0.gguf filter=lfs diff=lfs merge=lfs -text
+ gguf/base-v7-Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
+ gguf/comparison.png filter=lfs diff=lfs merge=lfs -text
+ workflows/pony-v7-simple.png filter=lfs diff=lfs merge=lfs -text
+ workflows/pony-v7-simple-gguf.png filter=lfs diff=lfs merge=lfs -text
+ workflows/pony-v7-noise-selection.png filter=lfs diff=lfs merge=lfs -text
+ workflows/pony-v7-lora.png filter=lfs diff=lfs merge=lfs -text
LICENSE ADDED
File without changes
README.md ADDED
@@ -0,0 +1,131 @@
---
license: other
license_name: pony-license
license_link: LICENSE
---

# Pony V7

![Pony V7](V7.webp)

Pony V7 is a versatile character generation model based on the AuraFlow architecture. It supports a wide range of styles and species types (humanoid, anthro, feral, and more) and handles character interactions through natural language prompts.

## Fictional

First, let me introduce [Fictional](https://fictional.ai) - our multimodal platform where AI characters come alive through text, images, voice, and (soon) video. Powered by Pony V7, V6, Chroma, Seedream 4, and other advanced models, Fictional lets you discover, create, and interact with characters who live their own lives and share their own stories.

Fictional is also what enables the development of models like V7, so if you're excited about the future of multimodal AI characters, please download Fictional on iOS or Android and help shape our future!

- **iOS**: https://apps.apple.com/us/app/fictional/id6739802573
- **Android**: https://play.google.com/store/apps/details?id=ai.fictional.app

### Get in touch with us

Please join [our Discord server](https://discord.gg/pYsdjMfu3q) if you have questions about Fictional and Pony models.

## Important model information

Please check [this article](https://civitai.com/articles/19986) to learn more about why it took so long for us to ship V7 and about upcoming model releases.

## Important HuggingFace links

- **[GGUF Models](gguf/README.md)** - Quantized models for lower VRAM usage (Q8_0 recommended for the best quality/size balance)
- **[Safetensor Model](safetensor/README.md)** - Single-file safetensors format for easier loading
- **[LoRA Training](lora/README.md)** - Information and tools for training LoRAs with SimpleTuner
- **[Workflows](workflows/README.md)** - ComfyUI workflow examples for standard and GGUF inference
- **[ComfyUI Nodes](comfy_nodes/README.md)** - Custom PonyNoise node for GPU/CPU noise selection

## Model prompting

This model supports a wide array of styles and aesthetics but provides an opinionated default prompt template:

```
special tags, factual description of image, stylistic description of image, additional content tags
```

### Special Tags

`score_X`, `style_cluster_x`, `source_X` - warning: V7 prompting may be inconsistent; please see the article above, as we are working on V7.1 to address this.

### Factual description of image

A description of what is portrayed in the image, without any stylistic indicators. Two recommendations:

1. Start with a single phrase describing what you want in the image before going into details

2. When referring to characters, use the pattern `<species> <gender> <name> from <source>`

For example, "Anthro bunny female Lola Bunny from Space Jam".

This model is capable of recognizing many popular and obscure characters and series.

### Stylistic description of image

Any information about the image medium, shot type, lighting, etc. (More info TBD with the captioning Colab.)

### Tags

V7 is trained on a combination of natural language prompts and tags and is capable of understanding both, so describing the intended result in normal language works in most cases, although you can add some tags after the main prompt to boost them.
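Putting the pieces together, a hypothetical prompt following the template above might look like this (the character, style, and tags are purely illustrative):

```
score_9, anthro bunny female Lola Bunny from Space Jam playing basketball on an outdoor court at dusk, digital painting, warm rim lighting, shallow depth of field, basketball, hoop, city skyline
```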
### Captioning Colab

To get a better understanding of V7 prompting, we are releasing a [captioning Colab](https://colab.research.google.com/drive/19PG-0ltob8EynxUZSwOdjMFmqyJ7ZOCB) with all the models used for V7 captioning.

## Supported inference settings

V7 supports resolutions in the range of 768px to 1536px. We recommend going for higher resolutions and using at least 30 steps during inference.
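For reference, below is a minimal text-to-image sketch using the `AuraFlowPipeline` declared in this repository's `model_index.json`. It assumes the repository id `purplesmartai/pony-v7-base`, a CUDA GPU, and the standard diffusers text-to-image call signature; resolution, steps, and guidance values are illustrative.

```python
# Minimal sketch: load Pony V7 with diffusers' AuraFlowPipeline and generate one image.
# Assumptions: repo id "purplesmartai/pony-v7-base" (from model_index.json), a CUDA GPU,
# and the standard diffusers text-to-image parameters.
import torch
from diffusers import AuraFlowPipeline

pipe = AuraFlowPipeline.from_pretrained(
    "purplesmartai/pony-v7-base",
    torch_dtype=torch.float16,
).to("cuda")

prompt = (
    "score_9, anthro bunny female Lola Bunny from Space Jam reading a book in a cozy library, "
    "digital painting, soft warm lighting, bookshelves, armchair"
)

image = pipe(
    prompt=prompt,
    width=1024,               # 768-1536 px is the supported range; higher is recommended
    height=1024,
    num_inference_steps=30,   # at least 30 steps recommended
    guidance_scale=3.5,       # illustrative value, tune per prompt
    generator=torch.Generator("cuda").manual_seed(0),
).images[0]

image.save("pony_v7_sample.png")
```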
## Highlights compared to V6

- Much stronger understanding of prompts, especially when it comes to spatial information and multiple characters
- Much stronger background support - both generating backgrounds and placing characters within them
- Much stronger realism support out of the box
- Ability to generate very dark and very light images
- Resolution up to 1536x1536 pixels
- Expanded character recognition (some V6 characters may be recognized less reliably, but overall character knowledge has been greatly extended)

## Special thanks

- Iceman for helping to procure the necessary training resources
- [Simo Ryu](https://x.com/cloneofsimo) and the rest of the FAL.ai team for creating AuraFlow and for emotional support
- [Runpod for providing captioning compute](https://runpod.io/?utm_source=purplesmartai)
- [Piclumen](https://www.piclumen.com/) for being our partners
- [City96](https://github.com/city96) for help with GGUF support
- The [diffusers](https://huggingface.co/docs/diffusers/en/index) team for supporting the AuraFlow integration work
- PSAI Server Subscribers for supporting the project costs
- PSAI Server Moderators for being vigilant and managing the community
- The many supporters who decided to remain anonymous; their help has been critical for getting V7 done

## Technical details

The model has been trained on ~10M images aesthetically ranked and selected from a superset of over 30M images, with a roughly 1:1 ratio between anime/cartoon/furry/pony datasets and a 1:1 ratio between safe/questionable/explicit ratings. 100% of the images have been tagged and captioned with high-quality, detailed captions.

All images have been used in training with both captions and tags. Artists' names have been removed and source data has been filtered based on our Opt-in/Opt-out program. Any inappropriate explicit content has been filtered out.

## Limitations

- This model does not handle text generation well and has degraded text rendering capabilities compared to base AuraFlow
- Special tags (including quality tags) have much weaker performance compared to V6, meaning score_9 will not necessarily yield better results on some prompts. We are working on a V7.1 follow-up to improve this
- Small details, and especially faces, may degrade significantly depending on art style; this is a combination of an outdated VAE and insufficient training, which we are trying to improve in V7.1

## LoRA training

We recommend using SimpleTuner for LoRA training, following [this guide](https://github.com/bghira/SimpleTuner/blob/main/documentation/quickstart/AURAFLOW.md).

For information on converting SimpleTuner LoRAs to a diffusers/ComfyUI compatible format, see the [LoRA folder](lora/). A [LoRA workflow example](workflows/pony-v7-lora.png) is also available.
## Commercial API

We provide a [commercial API](https://fal.ai/models/fal-ai/pony-v7) via our exclusive partner FAL.ai.

## License

This model is licensed under the Pony License.

In short, you can use this model and its outputs commercially unless you provide an inference service or application, have a company with over 1M in revenue, or use it in professional video production. These limitations do not apply if you use first-party commercial APIs.

If you want to use this model commercially, please reach out to us at [email protected].

Explicit permission for commercial inference has been granted to Civitai and Hugging Face.
V7.webp ADDED
comfy_nodes/ComfyUI_PonyNoise.zip ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:72d38d47ae4509d87a55ed21aec01c33fd537013b0a68f70c2fbc26cf7640fc7
size 1165
comfy_nodes/README.md ADDED
@@ -0,0 +1,26 @@
# ComfyUI PonyNoise Node

## Download

📦 [ComfyUI_PonyNoise.zip](ComfyUI_PonyNoise.zip)

## Overview

ComfyUI uses CPU noise by default to ensure consistency across different platforms, while `diffusers` uses GPU noise by default. GPU noise generation may vary between different GPU models, although it typically remains consistent across the latest NVIDIA cards.

This custom node lets you switch between GPU and CPU noise generation, enabling you to match `diffusers` output on the same machine when needed.
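To illustrate why this matters, here is a small PyTorch sketch (not part of the node) showing that the same seed produces different noise depending on whether the generator lives on the CPU or the GPU, which is why matching `diffusers` output requires matching its noise source:

```python
# Illustrative sketch only: the same seed generally gives different noise tensors depending
# on whether the random generator runs on CPU or GPU, which is why ComfyUI (CPU noise by
# default) and diffusers (GPU noise by default) can diverge even with identical settings.
import torch

shape = (1, 4, 128, 128)  # hypothetical latent shape

cpu_gen = torch.Generator(device="cpu").manual_seed(42)
cpu_noise = torch.randn(shape, generator=cpu_gen, device="cpu")

if torch.cuda.is_available():
    gpu_gen = torch.Generator(device="cuda").manual_seed(42)
    gpu_noise = torch.randn(shape, generator=gpu_gen, device="cuda")
    # The two tensors generally do NOT match, despite the identical seed.
    print(torch.allclose(cpu_noise, gpu_noise.cpu()))
```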
## Usage

To get started with the PonyNoise node, please refer to the [noise selection workflow](../workflows/pony-v7-noise-selection.png), which demonstrates proper configuration and integration with your generation pipeline.

## Installation

1. Download the [ComfyUI_PonyNoise.zip](ComfyUI_PonyNoise.zip) file
2. Extract the contents to your ComfyUI custom nodes directory
3. Restart ComfyUI
4. Load the workflow

## Acknowledgments

Special thanks to [Silver](https://github.com/silveroxides) for adapting the [ComfyUI Noise nodes](https://github.com/BlenderNeko/ComfyUI_Noise) and helping with workflows.
gguf/README.md ADDED
@@ -0,0 +1,22 @@
# Pony V7 GGUFs

Install the ComfyUI [GGUF nodes by City96](https://github.com/city96/ComfyUI-GGUF) before using these workflows.

We recommend Q8_0 for the best balance between quality and file size.

| VARIANT | MAX VRAM |
|---------|----------|
| F16 | ~16GB |
| [Q8_0](base-v7-Q8_0.gguf) | ~10GB |
| Q6_K | ~8GB |
| Q5_K_M | ~7GB |
| [Q4_0](base-v7-Q4_0.gguf) | ~6.5GB |
| Q4_K_M | ~6.5GB |
| Q3_K_L | ~6GB |
| Q3_K_S | ~6GB |
| Q2_K | ~5GB |

Download the ComfyUI workflow for GGUF [here](../workflows/pony-v7-simple-gguf.png).

![GGUF comparison](comparison.png)
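If you prefer to fetch a quant from a script rather than the browser, here is a minimal sketch using `huggingface_hub` (it assumes this repository's id is `purplesmartai/pony-v7-base`; the target directory is a hypothetical ComfyUI model folder):

```python
# Minimal sketch: download one of the GGUF quants with huggingface_hub.
# The repo id is taken from model_index.json; the filename matches the files in gguf/.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="purplesmartai/pony-v7-base",
    filename="gguf/base-v7-Q8_0.gguf",   # recommended quality/size balance
    local_dir="models/unet",             # hypothetical ComfyUI model directory
)
print(path)
```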
gguf/base-v7-Q4_0.gguf ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:091f36054915e573f64ca44d05eca27c67a1d758ea12b456d9cae47e6298cf68
size 3949749536
gguf/base-v7-Q8_0.gguf ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:50bb230f0a198a7e20dea9f19a21d0a9fb3b49e7833c39c2f5334adb4cfcb912
size 7347135776
gguf/comparison.png ADDED

Git LFS Details

  • SHA256: f027fd12058d49fa7118e3205b728997582b4229ada446eacb78b7fe31749fbc
  • Pointer size: 132 Bytes
  • Size of remote file: 9.96 MB
lora/README.md ADDED
@@ -0,0 +1,24 @@
# Pony V7 LoRA Training

## Training Guide

We recommend using SimpleTuner for LoRA training, following [this guide](https://github.com/bghira/SimpleTuner/blob/main/documentation/quickstart/AURAFLOW.md).

## ComfyUI LoRA Workflow

A [LoRA workflow example](../workflows/pony-v7-lora.png) is available showing how to load and use LoRAs with Pony V7. Simply drag and drop the workflow image onto your ComfyUI canvas to load it.

## LoRA Conversion Script

### [convert_simpletuner_lora.py](convert_simpletuner_lora.py)

A utility script to convert SimpleTuner LoRA weights to a diffusers-compatible format for AuraFlow models.

**Usage:**
```bash
python convert_simpletuner_lora.py <input_lora.safetensors> <output_lora.safetensors>
```

This script ensures your LoRAs trained with SimpleTuner can be loaded directly with diffusers' `load_lora_weights()` method or inside ComfyUI's LoRA nodes.
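As a quick check that a converted file loads, here is a minimal diffusers sketch. It assumes a recent diffusers version with AuraFlow LoRA support, the base model at `purplesmartai/pony-v7-base`, and an output file named `diffusers_compatible_lora.safetensors` in the current directory; all of those names are illustrative.

```python
# Minimal sketch: load a converted SimpleTuner LoRA into the AuraFlow pipeline via
# diffusers' load_lora_weights(). Repo id, directory, and file name are illustrative.
import torch
from diffusers import AuraFlowPipeline

pipe = AuraFlowPipeline.from_pretrained(
    "purplesmartai/pony-v7-base",
    torch_dtype=torch.float16,
).to("cuda")

# Load the converted LoRA from the current directory by file name.
pipe.load_lora_weights(".", weight_name="diffusers_compatible_lora.safetensors")

image = pipe(
    "score_9, anthro bunny female Lola Bunny from Space Jam, portrait, soft studio lighting",
    num_inference_steps=30,
).images[0]
image.save("lora_test.png")
```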
lora/convert_simpletuner_lora.py ADDED
@@ -0,0 +1,483 @@
1
+ #!/usr/bin/env python3
2
+ """
3
+ Convert SimpleTuner LoRA weights to diffusers-compatible format for AuraFlow.
4
+
5
+ This script converts LoRA weights saved by SimpleTuner into a format that can be
6
+ directly loaded by diffusers' load_lora_weights() method.
7
+
8
+ Usage:
9
+ python convert_simpletuner_lora.py <input_lora.safetensors> <output_lora.safetensors>
10
+
11
+ Example:
12
+ python convert_simpletuner_lora.py input_lora.safetensors diffusers_compatible_lora.safetensors
13
+ """
14
+
15
+ import argparse
16
+ import sys
17
+ from pathlib import Path
18
+ from typing import Dict
19
+
20
+ import safetensors.torch
21
+ import torch
22
+
23
+
24
+ def detect_lora_format(state_dict: Dict[str, torch.Tensor]) -> str:
25
+ """
26
+ Detect the format of the LoRA state dict.
27
+
28
+ Returns:
29
+ "peft" if already in PEFT/diffusers format
30
+ "mixed" if mixed format (some lora_A/B, some lora.down/up)
31
+ "simpletuner_transformer" if in SimpleTuner format with transformer prefix
32
+ "simpletuner_auraflow" if in SimpleTuner AuraFlow format
33
+ "kohya" if in Kohya format
34
+ "unknown" otherwise
35
+ """
36
+ keys = list(state_dict.keys())
37
+
38
+ # Check the actual weight naming convention (lora_A/lora_B vs lora_down/lora_up)
39
+ has_lora_a_b = any((".lora_A." in k or ".lora_B." in k) for k in keys)
40
+ has_lora_down_up = any((".lora_down." in k or ".lora_up." in k) for k in keys)
41
+ has_lora_dot_down_up = any((".lora.down." in k or ".lora.up." in k) for k in keys)
42
+
43
+ # Check prefixes
44
+ has_transformer_prefix = any(k.startswith("transformer.") for k in keys)
45
+ has_lora_transformer_prefix = any(k.startswith("lora_transformer_") for k in keys)
46
+ has_lora_unet_prefix = any(k.startswith("lora_unet_") for k in keys)
47
+
48
+ # Mixed format: has both lora_A/B AND lora.down/up (SimpleTuner hybrid)
49
+ if has_transformer_prefix and has_lora_a_b and (has_lora_down_up or has_lora_dot_down_up):
50
+ return "mixed"
51
+
52
+ # Pure PEFT format: transformer.* with ONLY lora_A/lora_B
53
+ if has_transformer_prefix and has_lora_a_b and not has_lora_down_up and not has_lora_dot_down_up:
54
+ return "peft"
55
+
56
+ # SimpleTuner with transformer prefix but old naming: transformer.* with lora_down/lora_up
57
+ if has_transformer_prefix and (has_lora_down_up or has_lora_dot_down_up):
58
+ return "simpletuner_transformer"
59
+
60
+ # SimpleTuner AuraFlow format: lora_transformer_* with lora_down/lora_up
61
+ if has_lora_transformer_prefix and has_lora_down_up:
62
+ return "simpletuner_auraflow"
63
+
64
+ # Traditional Kohya format: lora_unet_* with lora_down/lora_up
65
+ if has_lora_unet_prefix and has_lora_down_up:
66
+ return "kohya"
67
+
68
+ return "unknown"
69
+
70
+
71
+ def convert_mixed_lora_to_diffusers(state_dict: Dict[str, torch.Tensor]) -> Dict[str, torch.Tensor]:
72
+ """
73
+ Convert mixed LoRA format to pure PEFT format.
74
+
75
+ SimpleTuner sometimes saves a hybrid format where some layers use lora_A/lora_B
76
+ and others use .lora.down./.lora.up. This converts all to lora_A/lora_B.
77
+ """
78
+ new_state_dict = {}
79
+ converted_count = 0
80
+ kept_count = 0
81
+ skipped_count = 0
82
+ renames = []
83
+
84
+ # Get all keys
85
+ all_keys = sorted(state_dict.keys())
86
+
87
+ print("\nProcessing keys:")
88
+ print("-" * 80)
89
+
90
+ for key in all_keys:
91
+ # Already in correct format (lora_A or lora_B)
92
+ if ".lora_A." in key or ".lora_B." in key:
93
+ new_state_dict[key] = state_dict[key]
94
+ kept_count += 1
95
+
96
+ # Needs conversion: .lora.down. -> .lora_A.
97
+ elif ".lora.down.weight" in key:
98
+ new_key = key.replace(".lora.down.weight", ".lora_A.weight")
99
+ new_state_dict[new_key] = state_dict[key]
100
+ renames.append((key, new_key))
101
+ converted_count += 1
102
+
103
+ # Needs conversion: .lora.up. -> .lora_B.
104
+ elif ".lora.up.weight" in key:
105
+ new_key = key.replace(".lora.up.weight", ".lora_B.weight")
106
+ new_state_dict[new_key] = state_dict[key]
107
+ renames.append((key, new_key))
108
+ converted_count += 1
109
+
110
+ # Skip alpha keys (not used in PEFT format)
111
+ elif ".alpha" in key:
112
+ skipped_count += 1
113
+ continue
114
+
115
+ # Other keys (shouldn't happen, but keep them just in case)
116
+ else:
117
+ new_state_dict[key] = state_dict[key]
118
+ print(f"⚠ Warning: Unexpected key format: {key}")
119
+
120
+ print(f"\nSummary:")
121
+ print(f" ✓ Kept {kept_count} keys already in correct format (lora_A/lora_B)")
122
+ print(f" ✓ Converted {converted_count} keys from .lora.down/.lora.up to lora_A/lora_B")
123
+ print(f" ✓ Skipped {skipped_count} alpha keys")
124
+
125
+ if renames:
126
+ print(f"\nRenames applied ({len(renames)} conversions):")
127
+ print("-" * 80)
128
+ for old_key, new_key in renames:
129
+ # Show the difference more clearly
130
+ if ".lora.down.weight" in old_key:
131
+ layer = old_key.replace(".lora.down.weight", "")
132
+ print(f" {layer}")
133
+ print(f" .lora.down.weight → .lora_A.weight")
134
+ elif ".lora.up.weight" in old_key:
135
+ layer = old_key.replace(".lora.up.weight", "")
136
+ print(f" {layer}")
137
+ print(f" .lora.up.weight → .lora_B.weight")
138
+
139
+ return new_state_dict
140
+
141
+
142
+ def convert_simpletuner_transformer_to_diffusers(state_dict: Dict[str, torch.Tensor]) -> Dict[str, torch.Tensor]:
143
+ """
144
+ Convert SimpleTuner transformer format (already has transformer. prefix but uses lora_down/lora_up)
145
+ to diffusers PEFT format (transformer. prefix with lora_A/lora_B).
146
+
147
+ This is a simpler conversion since the key structure is already correct.
148
+ """
149
+ new_state_dict = {}
150
+ renames = []
151
+
152
+ # Get all unique LoRA layer base names (without .lora_down/.lora_up/.alpha suffix)
153
+ all_keys = list(state_dict.keys())
154
+ base_keys = set()
155
+
156
+ for key in all_keys:
157
+ if ".lora_down.weight" in key:
158
+ base_key = key.replace(".lora_down.weight", "")
159
+ base_keys.add(base_key)
160
+
161
+ print(f"\nFound {len(base_keys)} LoRA layers to convert")
162
+ print("-" * 80)
163
+
164
+ # Convert each layer
165
+ for base_key in sorted(base_keys):
166
+ down_key = f"{base_key}.lora_down.weight"
167
+ up_key = f"{base_key}.lora_up.weight"
168
+ alpha_key = f"{base_key}.alpha"
169
+
170
+ if down_key not in state_dict or up_key not in state_dict:
171
+ print(f"⚠ Warning: Missing weights for {base_key}")
172
+ continue
173
+
174
+ down_weight = state_dict.pop(down_key)
175
+ up_weight = state_dict.pop(up_key)
176
+
177
+ # Handle alpha scaling
178
+ has_alpha = False
179
+ if alpha_key in state_dict:
180
+ alpha = state_dict.pop(alpha_key)
181
+ lora_rank = down_weight.shape[0]
182
+ scale = alpha / lora_rank
183
+
184
+ # Calculate scale_down and scale_up to preserve the scale value
185
+ scale_down = scale
186
+ scale_up = 1.0
187
+ while scale_down * 2 < scale_up:
188
+ scale_down *= 2
189
+ scale_up /= 2
190
+
191
+ down_weight = down_weight * scale_down
192
+ up_weight = up_weight * scale_up
193
+ has_alpha = True
194
+
195
+ # Store in PEFT format (lora_A = down, lora_B = up)
196
+ new_down_key = f"{base_key}.lora_A.weight"
197
+ new_up_key = f"{base_key}.lora_B.weight"
198
+
199
+ new_state_dict[new_down_key] = down_weight
200
+ new_state_dict[new_up_key] = up_weight
201
+
202
+ renames.append((down_key, new_down_key, has_alpha))
203
+ renames.append((up_key, new_up_key, has_alpha))
204
+
205
+ # Check for any remaining keys
206
+ remaining = [k for k in state_dict.keys() if not k.startswith("text_encoder")]
207
+ if remaining:
208
+ print(f"⚠ Warning: {len(remaining)} keys were not converted: {remaining[:5]}")
209
+
210
+ print(f"\nRenames applied ({len(renames)} conversions):")
211
+ print("-" * 80)
212
+
213
+ # Group by layer
214
+ current_layer = None
215
+ for old_key, new_key, has_alpha in renames:
216
+ layer = old_key.replace(".lora_down.weight", "").replace(".lora_up.weight", "")
217
+
218
+ if layer != current_layer:
219
+ alpha_str = " (alpha scaled)" if has_alpha else ""
220
+ print(f"\n {layer}{alpha_str}")
221
+ current_layer = layer
222
+
223
+ if ".lora_down.weight" in old_key:
224
+ print(f" .lora_down.weight → .lora_A.weight")
225
+ elif ".lora_up.weight" in old_key:
226
+ print(f" .lora_up.weight → .lora_B.weight")
227
+
228
+ return new_state_dict
229
+
230
+
231
+ def convert_simpletuner_auraflow_to_diffusers(state_dict: Dict[str, torch.Tensor]) -> Dict[str, torch.Tensor]:
232
+ """
233
+ Convert SimpleTuner AuraFlow LoRA format to diffusers PEFT format.
234
+
235
+ SimpleTuner typically saves LoRAs in a format similar to Kohya's sd-scripts,
236
+ but for transformer-based models like AuraFlow, the keys may differ.
237
+ """
238
+ new_state_dict = {}
239
+
240
+ def _convert(original_key, diffusers_key, state_dict, new_state_dict):
241
+ """Helper to convert a single LoRA layer."""
242
+ down_key = f"{original_key}.lora_down.weight"
243
+ if down_key not in state_dict:
244
+ return False
245
+
246
+ down_weight = state_dict.pop(down_key)
247
+ lora_rank = down_weight.shape[0]
248
+
249
+ up_weight_key = f"{original_key}.lora_up.weight"
250
+ up_weight = state_dict.pop(up_weight_key)
251
+
252
+ # Handle alpha scaling
253
+ alpha_key = f"{original_key}.alpha"
254
+ if alpha_key in state_dict:
255
+ alpha = state_dict.pop(alpha_key)
256
+ scale = alpha / lora_rank
257
+
258
+ # Calculate scale_down and scale_up to preserve the scale value
259
+ scale_down = scale
260
+ scale_up = 1.0
261
+ while scale_down * 2 < scale_up:
262
+ scale_down *= 2
263
+ scale_up /= 2
264
+
265
+ down_weight = down_weight * scale_down
266
+ up_weight = up_weight * scale_up
267
+
268
+ # Store in PEFT format (lora_A = down, lora_B = up)
269
+ diffusers_down_key = f"{diffusers_key}.lora_A.weight"
270
+ new_state_dict[diffusers_down_key] = down_weight
271
+ new_state_dict[diffusers_down_key.replace(".lora_A.", ".lora_B.")] = up_weight
272
+
273
+ return True
274
+
275
+ # Get all unique LoRA layer names
276
+ all_unique_keys = {
277
+ k.replace(".lora_down.weight", "").replace(".lora_up.weight", "").replace(".alpha", "")
278
+ for k in state_dict
279
+ if ".lora_down.weight" in k or ".lora_up.weight" in k or ".alpha" in k
280
+ }
281
+
282
+ # Process transformer blocks
283
+ for original_key in sorted(all_unique_keys):
284
+ if original_key.startswith("lora_transformer_single_transformer_blocks_"):
285
+ # Single transformer blocks
286
+ parts = original_key.split("lora_transformer_single_transformer_blocks_")[-1].split("_")
287
+ block_idx = int(parts[0])
288
+ diffusers_key = f"single_transformer_blocks.{block_idx}"
289
+
290
+ # Map the rest of the key
291
+ remaining = "_".join(parts[1:])
292
+ if "attn_to_q" in remaining:
293
+ diffusers_key += ".attn.to_q"
294
+ elif "attn_to_k" in remaining:
295
+ diffusers_key += ".attn.to_k"
296
+ elif "attn_to_v" in remaining:
297
+ diffusers_key += ".attn.to_v"
298
+ elif "proj_out" in remaining:
299
+ diffusers_key += ".proj_out"
300
+ elif "proj_mlp" in remaining:
301
+ diffusers_key += ".proj_mlp"
302
+ elif "norm_linear" in remaining:
303
+ diffusers_key += ".norm.linear"
304
+ else:
305
+ print(f"Warning: Unhandled single block key pattern: {original_key}")
306
+ continue
307
+
308
+ elif original_key.startswith("lora_transformer_transformer_blocks_"):
309
+ # Double transformer blocks
310
+ parts = original_key.split("lora_transformer_transformer_blocks_")[-1].split("_")
311
+ block_idx = int(parts[0])
312
+ diffusers_key = f"transformer_blocks.{block_idx}"
313
+
314
+ # Map the rest of the key
315
+ remaining = "_".join(parts[1:])
316
+ if "attn_to_out_0" in remaining:
317
+ diffusers_key += ".attn.to_out.0"
318
+ elif "attn_to_add_out" in remaining:
319
+ diffusers_key += ".attn.to_add_out"
320
+ elif "attn_to_q" in remaining:
321
+ diffusers_key += ".attn.to_q"
322
+ elif "attn_to_k" in remaining:
323
+ diffusers_key += ".attn.to_k"
324
+ elif "attn_to_v" in remaining:
325
+ diffusers_key += ".attn.to_v"
326
+ elif "attn_add_q_proj" in remaining:
327
+ diffusers_key += ".attn.add_q_proj"
328
+ elif "attn_add_k_proj" in remaining:
329
+ diffusers_key += ".attn.add_k_proj"
330
+ elif "attn_add_v_proj" in remaining:
331
+ diffusers_key += ".attn.add_v_proj"
332
+ elif "ff_net_0_proj" in remaining:
333
+ diffusers_key += ".ff.net.0.proj"
334
+ elif "ff_net_2" in remaining:
335
+ diffusers_key += ".ff.net.2"
336
+ elif "ff_context_net_0_proj" in remaining:
337
+ diffusers_key += ".ff_context.net.0.proj"
338
+ elif "ff_context_net_2" in remaining:
339
+ diffusers_key += ".ff_context.net.2"
340
+ elif "norm1_linear" in remaining:
341
+ diffusers_key += ".norm1.linear"
342
+ elif "norm1_context_linear" in remaining:
343
+ diffusers_key += ".norm1_context.linear"
344
+ else:
345
+ print(f"Warning: Unhandled double block key pattern: {original_key}")
346
+ continue
347
+
348
+ elif original_key.startswith("lora_te1_") or original_key.startswith("lora_te_"):
349
+ # Text encoder keys - handle separately
350
+ print(f"Found text encoder key: {original_key}")
351
+ continue
352
+
353
+ else:
354
+ print(f"Warning: Unknown key pattern: {original_key}")
355
+ continue
356
+
357
+ # Perform the conversion
358
+ _convert(original_key, diffusers_key, state_dict, new_state_dict)
359
+
360
+ # Add "transformer." prefix to all keys
361
+ transformer_state_dict = {
362
+ f"transformer.{k}": v for k, v in new_state_dict.items() if not k.startswith("text_model.")
363
+ }
364
+
365
+ # Check for remaining unconverted keys
366
+ if len(state_dict) > 0:
367
+ remaining_keys = [k for k in state_dict.keys() if not k.startswith("lora_te")]
368
+ if remaining_keys:
369
+ print(f"Warning: Some keys were not converted: {remaining_keys[:10]}")
370
+
371
+ return transformer_state_dict
372
+
373
+
374
+ def convert_lora(input_path: str, output_path: str) -> None:
375
+ """
376
+ Main conversion function.
377
+
378
+ Args:
379
+ input_path: Path to input LoRA safetensors file
380
+ output_path: Path to output diffusers-compatible safetensors file
381
+ """
382
+ print(f"Loading LoRA from: {input_path}")
383
+ state_dict = safetensors.torch.load_file(input_path)
384
+
385
+ print(f"Detecting LoRA format...")
386
+ format_type = detect_lora_format(state_dict)
387
+ print(f"Detected format: {format_type}")
388
+
389
+ if format_type == "peft":
390
+ print("LoRA is already in diffusers-compatible PEFT format!")
391
+ print("No conversion needed. Copying file...")
392
+ import shutil
393
+ shutil.copy(input_path, output_path)
394
+ return
395
+
396
+ elif format_type == "mixed":
397
+ print("Converting MIXED format LoRA to pure PEFT format...")
398
+ print("(Some layers use lora_A/B, others use .lora.down/.lora.up)")
399
+ converted_state_dict = convert_mixed_lora_to_diffusers(state_dict.copy())
400
+
401
+ elif format_type == "simpletuner_transformer":
402
+ print("Converting SimpleTuner transformer format to diffusers...")
403
+ print("(has transformer. prefix but uses lora_down/lora_up naming)")
404
+ converted_state_dict = convert_simpletuner_transformer_to_diffusers(state_dict.copy())
405
+
406
+ elif format_type == "simpletuner_auraflow":
407
+ print("Converting SimpleTuner AuraFlow format to diffusers...")
408
+ converted_state_dict = convert_simpletuner_auraflow_to_diffusers(state_dict.copy())
409
+
410
+ elif format_type == "kohya":
411
+ print("Note: Detected Kohya format. This converter is optimized for AuraFlow.")
412
+ print("For other models, diffusers has built-in conversion.")
413
+ converted_state_dict = convert_simpletuner_auraflow_to_diffusers(state_dict.copy())
414
+
415
+ else:
416
+ print("Error: Unknown LoRA format!")
417
+ print("Sample keys from the state dict:")
418
+ for i, key in enumerate(list(state_dict.keys())[:20]):
419
+ print(f" {key}")
420
+ sys.exit(1)
421
+
422
+ print(f"Saving converted LoRA to: {output_path}")
423
+ safetensors.torch.save_file(converted_state_dict, output_path)
424
+
425
+ print("\nConversion complete!")
426
+ print(f"Original keys: {len(state_dict)}")
427
+ print(f"Converted keys: {len(converted_state_dict)}")
428
+
429
+ def main():
430
+ parser = argparse.ArgumentParser(
431
+ description="Convert SimpleTuner LoRA to diffusers-compatible format",
432
+ formatter_class=argparse.RawDescriptionHelpFormatter,
433
+ epilog="""
434
+ Examples:
435
+ # Convert a SimpleTuner LoRA for AuraFlow
436
+ python convert_simpletuner_lora.py my_lora.safetensors diffusers_lora.safetensors
437
+
438
+ # Check format without converting
439
+ python convert_simpletuner_lora.py my_lora.safetensors /tmp/test.safetensors
440
+ """
441
+ )
442
+
443
+ parser.add_argument(
444
+ "input",
445
+ type=str,
446
+ help="Input LoRA file (SimpleTuner format)"
447
+ )
448
+
449
+ parser.add_argument(
450
+ "output",
451
+ type=str,
452
+ help="Output LoRA file (diffusers-compatible format)"
453
+ )
454
+
455
+ parser.add_argument(
456
+ "--dry-run",
457
+ action="store_true",
458
+ help="Only detect format, don't convert"
459
+ )
460
+
461
+ args = parser.parse_args()
462
+
463
+ # Validate input file exists
464
+ if not Path(args.input).exists():
465
+ print(f"Error: Input file not found: {args.input}")
466
+ sys.exit(1)
467
+
468
+ if args.dry_run:
469
+ print(f"Loading LoRA from: {args.input}")
470
+ state_dict = safetensors.torch.load_file(args.input)
471
+ format_type = detect_lora_format(state_dict)
472
+ print(f"Detected format: {format_type}")
473
+ print(f"\nSample keys ({min(10, len(state_dict))} of {len(state_dict)}):")
474
+ for key in list(state_dict.keys())[:10]:
475
+ print(f" {key}")
476
+ return
477
+
478
+ convert_lora(args.input, args.output)
479
+
480
+
481
+ if __name__ == "__main__":
482
+ main()
483
+
model_index.json ADDED
@@ -0,0 +1,25 @@
{
  "_class_name": "AuraFlowPipeline",
  "_diffusers_version": "0.31.0.dev0",
  "_name_or_path": "purplesmartai/pony-v7-base",
  "scheduler": [
    "diffusers",
    "FlowMatchEulerDiscreteScheduler"
  ],
  "text_encoder": [
    "transformers",
    "UMT5EncoderModel"
  ],
  "tokenizer": [
    "transformers",
    "LlamaTokenizerFast"
  ],
  "transformer": [
    "diffusers",
    "AuraFlowTransformer2DModel"
  ],
  "vae": [
    "diffusers",
    "AutoencoderKL"
  ]
}
safetensor/README.md ADDED
File without changes
safetensor/pony-v7-base.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:5c9354578234390fc0cf2302b99da1d5dfec324834b536dff8ad328fc0982235
size 13717249520
scheduler/scheduler_config.json ADDED
@@ -0,0 +1,6 @@
{
  "_class_name": "FlowMatchEulerDiscreteScheduler",
  "_diffusers_version": "0.30.0.dev0",
  "num_train_timesteps": 1000,
  "shift": 1.73
}
text_encoder/config.json ADDED
@@ -0,0 +1,34 @@
{
  "_name_or_path": "/raid/.cache/huggingface/models--fal--AuraFlow/snapshots/edf69bec4c8c57f5278a655aaca3ceb60d82c0b4/text_encoder",
  "architectures": [
    "UMT5EncoderModel"
  ],
  "classifier_dropout": 0.0,
  "d_ff": 5120,
  "d_kv": 64,
  "d_model": 2048,
  "decoder_start_token_id": 0,
  "dense_act_fn": "gelu_new",
  "dropout_rate": 0.1,
  "eos_token_id": 2,
  "feed_forward_proj": "gated-gelu",
  "initializer_factor": 1.0,
  "is_encoder_decoder": true,
  "is_gated_act": true,
  "layer_norm_epsilon": 1e-06,
  "model_type": "umt5",
  "num_decoder_layers": 24,
  "num_heads": 32,
  "num_layers": 24,
  "output_past": true,
  "pad_token_id": 0,
  "relative_attention_max_distance": 128,
  "relative_attention_num_buckets": 32,
  "scalable_attention": true,
  "tie_word_embeddings": false,
  "tokenizer_class": "LlamaTokenizerFast",
  "torch_dtype": "float16",
  "transformers_version": "4.41.2",
  "use_cache": true,
  "vocab_size": 32128
}
text_encoder/model.fp16.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:decf9b70814ed5e9965bfca9fbd0483462e2bf743790663025b7742f8c014c72
size 2950448704
text_encoder/model.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0a07449cf1141c0ec86e653c00465f6f0d79c6e58a2c60c8bcf4203d0e4ec4f6
size 4894234112
tokenizer/added_tokens.json ADDED
@@ -0,0 +1,102 @@
1
+ {
2
+ "<extra_id_0>": 32099,
3
+ "<extra_id_10>": 32089,
4
+ "<extra_id_11>": 32088,
5
+ "<extra_id_12>": 32087,
6
+ "<extra_id_13>": 32086,
7
+ "<extra_id_14>": 32085,
8
+ "<extra_id_15>": 32084,
9
+ "<extra_id_16>": 32083,
10
+ "<extra_id_17>": 32082,
11
+ "<extra_id_18>": 32081,
12
+ "<extra_id_19>": 32080,
13
+ "<extra_id_1>": 32098,
14
+ "<extra_id_20>": 32079,
15
+ "<extra_id_21>": 32078,
16
+ "<extra_id_22>": 32077,
17
+ "<extra_id_23>": 32076,
18
+ "<extra_id_24>": 32075,
19
+ "<extra_id_25>": 32074,
20
+ "<extra_id_26>": 32073,
21
+ "<extra_id_27>": 32072,
22
+ "<extra_id_28>": 32071,
23
+ "<extra_id_29>": 32070,
24
+ "<extra_id_2>": 32097,
25
+ "<extra_id_30>": 32069,
26
+ "<extra_id_31>": 32068,
27
+ "<extra_id_32>": 32067,
28
+ "<extra_id_33>": 32066,
29
+ "<extra_id_34>": 32065,
30
+ "<extra_id_35>": 32064,
31
+ "<extra_id_36>": 32063,
32
+ "<extra_id_37>": 32062,
33
+ "<extra_id_38>": 32061,
34
+ "<extra_id_39>": 32060,
35
+ "<extra_id_3>": 32096,
36
+ "<extra_id_40>": 32059,
37
+ "<extra_id_41>": 32058,
38
+ "<extra_id_42>": 32057,
39
+ "<extra_id_43>": 32056,
40
+ "<extra_id_44>": 32055,
41
+ "<extra_id_45>": 32054,
42
+ "<extra_id_46>": 32053,
43
+ "<extra_id_47>": 32052,
44
+ "<extra_id_48>": 32051,
45
+ "<extra_id_49>": 32050,
46
+ "<extra_id_4>": 32095,
47
+ "<extra_id_50>": 32049,
48
+ "<extra_id_51>": 32048,
49
+ "<extra_id_52>": 32047,
50
+ "<extra_id_53>": 32046,
51
+ "<extra_id_54>": 32045,
52
+ "<extra_id_55>": 32044,
53
+ "<extra_id_56>": 32043,
54
+ "<extra_id_57>": 32042,
55
+ "<extra_id_58>": 32041,
56
+ "<extra_id_59>": 32040,
57
+ "<extra_id_5>": 32094,
58
+ "<extra_id_60>": 32039,
59
+ "<extra_id_61>": 32038,
60
+ "<extra_id_62>": 32037,
61
+ "<extra_id_63>": 32036,
62
+ "<extra_id_64>": 32035,
63
+ "<extra_id_65>": 32034,
64
+ "<extra_id_66>": 32033,
65
+ "<extra_id_67>": 32032,
66
+ "<extra_id_68>": 32031,
67
+ "<extra_id_69>": 32030,
68
+ "<extra_id_6>": 32093,
69
+ "<extra_id_70>": 32029,
70
+ "<extra_id_71>": 32028,
71
+ "<extra_id_72>": 32027,
72
+ "<extra_id_73>": 32026,
73
+ "<extra_id_74>": 32025,
74
+ "<extra_id_75>": 32024,
75
+ "<extra_id_76>": 32023,
76
+ "<extra_id_77>": 32022,
77
+ "<extra_id_78>": 32021,
78
+ "<extra_id_79>": 32020,
79
+ "<extra_id_7>": 32092,
80
+ "<extra_id_80>": 32019,
81
+ "<extra_id_81>": 32018,
82
+ "<extra_id_82>": 32017,
83
+ "<extra_id_83>": 32016,
84
+ "<extra_id_84>": 32015,
85
+ "<extra_id_85>": 32014,
86
+ "<extra_id_86>": 32013,
87
+ "<extra_id_87>": 32012,
88
+ "<extra_id_88>": 32011,
89
+ "<extra_id_89>": 32010,
90
+ "<extra_id_8>": 32091,
91
+ "<extra_id_90>": 32009,
92
+ "<extra_id_91>": 32008,
93
+ "<extra_id_92>": 32007,
94
+ "<extra_id_93>": 32006,
95
+ "<extra_id_94>": 32005,
96
+ "<extra_id_95>": 32004,
97
+ "<extra_id_96>": 32003,
98
+ "<extra_id_97>": 32002,
99
+ "<extra_id_98>": 32001,
100
+ "<extra_id_99>": 32000,
101
+ "<extra_id_9>": 32090
102
+ }
tokenizer/special_tokens_map.json ADDED
@@ -0,0 +1,132 @@
1
+ {
2
+ "additional_special_tokens": [
3
+ "<extra_id_99>",
4
+ "<extra_id_98>",
5
+ "<extra_id_97>",
6
+ "<extra_id_96>",
7
+ "<extra_id_95>",
8
+ "<extra_id_94>",
9
+ "<extra_id_93>",
10
+ "<extra_id_92>",
11
+ "<extra_id_91>",
12
+ "<extra_id_90>",
13
+ "<extra_id_89>",
14
+ "<extra_id_88>",
15
+ "<extra_id_87>",
16
+ "<extra_id_86>",
17
+ "<extra_id_85>",
18
+ "<extra_id_84>",
19
+ "<extra_id_83>",
20
+ "<extra_id_82>",
21
+ "<extra_id_81>",
22
+ "<extra_id_80>",
23
+ "<extra_id_79>",
24
+ "<extra_id_78>",
25
+ "<extra_id_77>",
26
+ "<extra_id_76>",
27
+ "<extra_id_75>",
28
+ "<extra_id_74>",
29
+ "<extra_id_73>",
30
+ "<extra_id_72>",
31
+ "<extra_id_71>",
32
+ "<extra_id_70>",
33
+ "<extra_id_69>",
34
+ "<extra_id_68>",
35
+ "<extra_id_67>",
36
+ "<extra_id_66>",
37
+ "<extra_id_65>",
38
+ "<extra_id_64>",
39
+ "<extra_id_63>",
40
+ "<extra_id_62>",
41
+ "<extra_id_61>",
42
+ "<extra_id_60>",
43
+ "<extra_id_59>",
44
+ "<extra_id_58>",
45
+ "<extra_id_57>",
46
+ "<extra_id_56>",
47
+ "<extra_id_55>",
48
+ "<extra_id_54>",
49
+ "<extra_id_53>",
50
+ "<extra_id_52>",
51
+ "<extra_id_51>",
52
+ "<extra_id_50>",
53
+ "<extra_id_49>",
54
+ "<extra_id_48>",
55
+ "<extra_id_47>",
56
+ "<extra_id_46>",
57
+ "<extra_id_45>",
58
+ "<extra_id_44>",
59
+ "<extra_id_43>",
60
+ "<extra_id_42>",
61
+ "<extra_id_41>",
62
+ "<extra_id_40>",
63
+ "<extra_id_39>",
64
+ "<extra_id_38>",
65
+ "<extra_id_37>",
66
+ "<extra_id_36>",
67
+ "<extra_id_35>",
68
+ "<extra_id_34>",
69
+ "<extra_id_33>",
70
+ "<extra_id_32>",
71
+ "<extra_id_31>",
72
+ "<extra_id_30>",
73
+ "<extra_id_29>",
74
+ "<extra_id_28>",
75
+ "<extra_id_27>",
76
+ "<extra_id_26>",
77
+ "<extra_id_25>",
78
+ "<extra_id_24>",
79
+ "<extra_id_23>",
80
+ "<extra_id_22>",
81
+ "<extra_id_21>",
82
+ "<extra_id_20>",
83
+ "<extra_id_19>",
84
+ "<extra_id_18>",
85
+ "<extra_id_17>",
86
+ "<extra_id_16>",
87
+ "<extra_id_15>",
88
+ "<extra_id_14>",
89
+ "<extra_id_13>",
90
+ "<extra_id_12>",
91
+ "<extra_id_11>",
92
+ "<extra_id_10>",
93
+ "<extra_id_9>",
94
+ "<extra_id_8>",
95
+ "<extra_id_7>",
96
+ "<extra_id_6>",
97
+ "<extra_id_5>",
98
+ "<extra_id_4>",
99
+ "<extra_id_3>",
100
+ "<extra_id_2>",
101
+ "<extra_id_1>",
102
+ "<extra_id_0>"
103
+ ],
104
+ "bos_token": {
105
+ "content": "<s>",
106
+ "lstrip": false,
107
+ "normalized": false,
108
+ "rstrip": false,
109
+ "single_word": false
110
+ },
111
+ "eos_token": {
112
+ "content": "</s>",
113
+ "lstrip": false,
114
+ "normalized": false,
115
+ "rstrip": false,
116
+ "single_word": false
117
+ },
118
+ "pad_token": {
119
+ "content": "<s>",
120
+ "lstrip": false,
121
+ "normalized": false,
122
+ "rstrip": false,
123
+ "single_word": false
124
+ },
125
+ "unk_token": {
126
+ "content": "<unk>",
127
+ "lstrip": false,
128
+ "normalized": false,
129
+ "rstrip": false,
130
+ "single_word": false
131
+ }
132
+ }
tokenizer/tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer/tokenizer.model ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9e556afd44213b6bd1be2b850ebbbd98f5481437a8021afaf58ee7fb1818d347
size 499723
tokenizer/tokenizer_config.json ADDED
@@ -0,0 +1,945 @@
1
+ {
2
+ "add_bos_token": false,
3
+ "add_eos_token": true,
4
+ "add_prefix_space": true,
5
+ "added_tokens_decoder": {
6
+ "0": {
7
+ "content": "<unk>",
8
+ "lstrip": false,
9
+ "normalized": false,
10
+ "rstrip": false,
11
+ "single_word": false,
12
+ "special": true
13
+ },
14
+ "1": {
15
+ "content": "<s>",
16
+ "lstrip": false,
17
+ "normalized": false,
18
+ "rstrip": false,
19
+ "single_word": false,
20
+ "special": true
21
+ },
22
+ "2": {
23
+ "content": "</s>",
24
+ "lstrip": false,
25
+ "normalized": false,
26
+ "rstrip": false,
27
+ "single_word": false,
28
+ "special": true
29
+ },
30
+ "32000": {
31
+ "content": "<extra_id_99>",
32
+ "lstrip": false,
33
+ "normalized": false,
34
+ "rstrip": false,
35
+ "single_word": false,
36
+ "special": true
37
+ },
38
+ "32001": {
39
+ "content": "<extra_id_98>",
40
+ "lstrip": false,
41
+ "normalized": false,
42
+ "rstrip": false,
43
+ "single_word": false,
44
+ "special": true
45
+ },
46
+ "32002": {
47
+ "content": "<extra_id_97>",
48
+ "lstrip": false,
49
+ "normalized": false,
50
+ "rstrip": false,
51
+ "single_word": false,
52
+ "special": true
53
+ },
54
+ "32003": {
55
+ "content": "<extra_id_96>",
56
+ "lstrip": false,
57
+ "normalized": false,
58
+ "rstrip": false,
59
+ "single_word": false,
60
+ "special": true
61
+ },
62
+ "32004": {
63
+ "content": "<extra_id_95>",
64
+ "lstrip": false,
65
+ "normalized": false,
66
+ "rstrip": false,
67
+ "single_word": false,
68
+ "special": true
69
+ },
70
+ "32005": {
71
+ "content": "<extra_id_94>",
72
+ "lstrip": false,
73
+ "normalized": false,
74
+ "rstrip": false,
75
+ "single_word": false,
76
+ "special": true
77
+ },
78
+ "32006": {
79
+ "content": "<extra_id_93>",
80
+ "lstrip": false,
81
+ "normalized": false,
82
+ "rstrip": false,
83
+ "single_word": false,
84
+ "special": true
85
+ },
86
+ "32007": {
87
+ "content": "<extra_id_92>",
88
+ "lstrip": false,
89
+ "normalized": false,
90
+ "rstrip": false,
91
+ "single_word": false,
92
+ "special": true
93
+ },
94
+ "32008": {
95
+ "content": "<extra_id_91>",
96
+ "lstrip": false,
97
+ "normalized": false,
98
+ "rstrip": false,
99
+ "single_word": false,
100
+ "special": true
101
+ },
102
+ "32009": {
103
+ "content": "<extra_id_90>",
104
+ "lstrip": false,
105
+ "normalized": false,
106
+ "rstrip": false,
107
+ "single_word": false,
108
+ "special": true
109
+ },
110
+ "32010": {
111
+ "content": "<extra_id_89>",
112
+ "lstrip": false,
113
+ "normalized": false,
114
+ "rstrip": false,
115
+ "single_word": false,
116
+ "special": true
117
+ },
118
+ "32011": {
119
+ "content": "<extra_id_88>",
120
+ "lstrip": false,
121
+ "normalized": false,
122
+ "rstrip": false,
123
+ "single_word": false,
124
+ "special": true
125
+ },
126
+ "32012": {
127
+ "content": "<extra_id_87>",
128
+ "lstrip": false,
129
+ "normalized": false,
130
+ "rstrip": false,
131
+ "single_word": false,
132
+ "special": true
133
+ },
134
+ "32013": {
135
+ "content": "<extra_id_86>",
136
+ "lstrip": false,
137
+ "normalized": false,
138
+ "rstrip": false,
139
+ "single_word": false,
140
+ "special": true
141
+ },
142
+ "32014": {
143
+ "content": "<extra_id_85>",
144
+ "lstrip": false,
145
+ "normalized": false,
146
+ "rstrip": false,
147
+ "single_word": false,
148
+ "special": true
149
+ },
150
+ "32015": {
151
+ "content": "<extra_id_84>",
152
+ "lstrip": false,
153
+ "normalized": false,
154
+ "rstrip": false,
155
+ "single_word": false,
156
+ "special": true
157
+ },
158
+ "32016": {
159
+ "content": "<extra_id_83>",
160
+ "lstrip": false,
161
+ "normalized": false,
162
+ "rstrip": false,
163
+ "single_word": false,
164
+ "special": true
165
+ },
166
+ "32017": {
167
+ "content": "<extra_id_82>",
168
+ "lstrip": false,
169
+ "normalized": false,
170
+ "rstrip": false,
171
+ "single_word": false,
172
+ "special": true
173
+ },
174
+ "32018": {
175
+ "content": "<extra_id_81>",
176
+ "lstrip": false,
177
+ "normalized": false,
178
+ "rstrip": false,
179
+ "single_word": false,
180
+ "special": true
181
+ },
182
+ "32019": {
183
+ "content": "<extra_id_80>",
184
+ "lstrip": false,
185
+ "normalized": false,
186
+ "rstrip": false,
187
+ "single_word": false,
188
+ "special": true
189
+ },
190
+ "32020": {
191
+ "content": "<extra_id_79>",
192
+ "lstrip": false,
193
+ "normalized": false,
194
+ "rstrip": false,
195
+ "single_word": false,
196
+ "special": true
197
+ },
198
+ "32021": {
199
+ "content": "<extra_id_78>",
200
+ "lstrip": false,
201
+ "normalized": false,
202
+ "rstrip": false,
203
+ "single_word": false,
204
+ "special": true
205
+ },
206
+ "32022": {
207
+ "content": "<extra_id_77>",
208
+ "lstrip": false,
209
+ "normalized": false,
210
+ "rstrip": false,
211
+ "single_word": false,
212
+ "special": true
213
+ },
214
+ "32023": {
215
+ "content": "<extra_id_76>",
216
+ "lstrip": false,
217
+ "normalized": false,
218
+ "rstrip": false,
219
+ "single_word": false,
220
+ "special": true
221
+ },
222
+ "32024": {
223
+ "content": "<extra_id_75>",
224
+ "lstrip": false,
225
+ "normalized": false,
226
+ "rstrip": false,
227
+ "single_word": false,
228
+ "special": true
229
+ },
230
+ "32025": {
231
+ "content": "<extra_id_74>",
232
+ "lstrip": false,
233
+ "normalized": false,
234
+ "rstrip": false,
235
+ "single_word": false,
236
+ "special": true
237
+ },
238
+ "32026": {
239
+ "content": "<extra_id_73>",
240
+ "lstrip": false,
241
+ "normalized": false,
242
+ "rstrip": false,
243
+ "single_word": false,
244
+ "special": true
245
+ },
246
+ "32027": {
247
+ "content": "<extra_id_72>",
248
+ "lstrip": false,
249
+ "normalized": false,
250
+ "rstrip": false,
251
+ "single_word": false,
252
+ "special": true
253
+ },
254
+ "32028": {
255
+ "content": "<extra_id_71>",
256
+ "lstrip": false,
257
+ "normalized": false,
258
+ "rstrip": false,
259
+ "single_word": false,
260
+ "special": true
261
+ },
262
+ "32029": {
263
+ "content": "<extra_id_70>",
264
+ "lstrip": false,
265
+ "normalized": false,
266
+ "rstrip": false,
267
+ "single_word": false,
268
+ "special": true
269
+ },
270
+ "32030": {
271
+ "content": "<extra_id_69>",
272
+ "lstrip": false,
273
+ "normalized": false,
274
+ "rstrip": false,
275
+ "single_word": false,
276
+ "special": true
277
+ },
278
+ "32031": {
279
+ "content": "<extra_id_68>",
280
+ "lstrip": false,
281
+ "normalized": false,
282
+ "rstrip": false,
283
+ "single_word": false,
284
+ "special": true
285
+ },
286
+ "32032": {
287
+ "content": "<extra_id_67>",
288
+ "lstrip": false,
289
+ "normalized": false,
290
+ "rstrip": false,
291
+ "single_word": false,
292
+ "special": true
293
+ },
294
+ "32033": {
295
+ "content": "<extra_id_66>",
296
+ "lstrip": false,
297
+ "normalized": false,
298
+ "rstrip": false,
299
+ "single_word": false,
300
+ "special": true
301
+ },
302
+ "32034": {
303
+ "content": "<extra_id_65>",
304
+ "lstrip": false,
305
+ "normalized": false,
306
+ "rstrip": false,
307
+ "single_word": false,
308
+ "special": true
309
+ },
310
+ "32035": {
311
+ "content": "<extra_id_64>",
312
+ "lstrip": false,
313
+ "normalized": false,
314
+ "rstrip": false,
315
+ "single_word": false,
316
+ "special": true
317
+ },
318
+ "32036": {
319
+ "content": "<extra_id_63>",
320
+ "lstrip": false,
321
+ "normalized": false,
322
+ "rstrip": false,
323
+ "single_word": false,
324
+ "special": true
325
+ },
326
+ "32037": {
327
+ "content": "<extra_id_62>",
328
+ "lstrip": false,
329
+ "normalized": false,
330
+ "rstrip": false,
331
+ "single_word": false,
332
+ "special": true
333
+ },
334
+ "32038": {
335
+ "content": "<extra_id_61>",
336
+ "lstrip": false,
337
+ "normalized": false,
338
+ "rstrip": false,
339
+ "single_word": false,
340
+ "special": true
341
+ },
342
+ "32039": {
343
+ "content": "<extra_id_60>",
344
+ "lstrip": false,
345
+ "normalized": false,
346
+ "rstrip": false,
347
+ "single_word": false,
348
+ "special": true
349
+ },
350
+ "32040": {
351
+ "content": "<extra_id_59>",
352
+ "lstrip": false,
353
+ "normalized": false,
354
+ "rstrip": false,
355
+ "single_word": false,
356
+ "special": true
357
+ },
358
+ "32041": {
359
+ "content": "<extra_id_58>",
360
+ "lstrip": false,
361
+ "normalized": false,
362
+ "rstrip": false,
363
+ "single_word": false,
364
+ "special": true
365
+ },
366
+ "32042": {
367
+ "content": "<extra_id_57>",
368
+ "lstrip": false,
369
+ "normalized": false,
370
+ "rstrip": false,
371
+ "single_word": false,
372
+ "special": true
373
+ },
374
+ "32043": {
375
+ "content": "<extra_id_56>",
376
+ "lstrip": false,
377
+ "normalized": false,
378
+ "rstrip": false,
379
+ "single_word": false,
380
+ "special": true
381
+ },
382
+ "32044": {
383
+ "content": "<extra_id_55>",
384
+ "lstrip": false,
385
+ "normalized": false,
386
+ "rstrip": false,
387
+ "single_word": false,
388
+ "special": true
389
+ },
390
+ "32045": {
391
+ "content": "<extra_id_54>",
392
+ "lstrip": false,
393
+ "normalized": false,
394
+ "rstrip": false,
395
+ "single_word": false,
396
+ "special": true
397
+ },
398
+ "32046": {
399
+ "content": "<extra_id_53>",
400
+ "lstrip": false,
401
+ "normalized": false,
402
+ "rstrip": false,
403
+ "single_word": false,
404
+ "special": true
405
+ },
406
+ "32047": {
407
+ "content": "<extra_id_52>",
408
+ "lstrip": false,
409
+ "normalized": false,
410
+ "rstrip": false,
411
+ "single_word": false,
412
+ "special": true
413
+ },
414
+ "32048": {
415
+ "content": "<extra_id_51>",
416
+ "lstrip": false,
417
+ "normalized": false,
418
+ "rstrip": false,
419
+ "single_word": false,
420
+ "special": true
421
+ },
422
+ "32049": {
423
+ "content": "<extra_id_50>",
424
+ "lstrip": false,
425
+ "normalized": false,
426
+ "rstrip": false,
427
+ "single_word": false,
428
+ "special": true
429
+ },
430
+ "32050": {
431
+ "content": "<extra_id_49>",
432
+ "lstrip": false,
433
+ "normalized": false,
434
+ "rstrip": false,
435
+ "single_word": false,
436
+ "special": true
437
+ },
438
+ "32051": {
439
+ "content": "<extra_id_48>",
440
+ "lstrip": false,
441
+ "normalized": false,
442
+ "rstrip": false,
443
+ "single_word": false,
444
+ "special": true
445
+ },
446
+ "32052": {
447
+ "content": "<extra_id_47>",
448
+ "lstrip": false,
449
+ "normalized": false,
450
+ "rstrip": false,
451
+ "single_word": false,
452
+ "special": true
453
+ },
454
+ "32053": {
455
+ "content": "<extra_id_46>",
456
+ "lstrip": false,
457
+ "normalized": false,
458
+ "rstrip": false,
459
+ "single_word": false,
460
+ "special": true
461
+ },
462
+ "32054": {
463
+ "content": "<extra_id_45>",
464
+ "lstrip": false,
465
+ "normalized": false,
466
+ "rstrip": false,
467
+ "single_word": false,
468
+ "special": true
469
+ },
470
+ "32055": {
471
+ "content": "<extra_id_44>",
472
+ "lstrip": false,
473
+ "normalized": false,
474
+ "rstrip": false,
475
+ "single_word": false,
476
+ "special": true
477
+ },
478
+ "32056": {
479
+ "content": "<extra_id_43>",
480
+ "lstrip": false,
481
+ "normalized": false,
482
+ "rstrip": false,
483
+ "single_word": false,
484
+ "special": true
485
+ },
486
+ "32057": {
487
+ "content": "<extra_id_42>",
488
+ "lstrip": false,
489
+ "normalized": false,
490
+ "rstrip": false,
491
+ "single_word": false,
492
+ "special": true
493
+ },
494
+ "32058": {
495
+ "content": "<extra_id_41>",
496
+ "lstrip": false,
497
+ "normalized": false,
498
+ "rstrip": false,
499
+ "single_word": false,
500
+ "special": true
501
+ },
502
+ "32059": {
503
+ "content": "<extra_id_40>",
504
+ "lstrip": false,
505
+ "normalized": false,
506
+ "rstrip": false,
507
+ "single_word": false,
508
+ "special": true
509
+ },
510
+ "32060": {
511
+ "content": "<extra_id_39>",
512
+ "lstrip": false,
513
+ "normalized": false,
514
+ "rstrip": false,
515
+ "single_word": false,
516
+ "special": true
517
+ },
518
+ "32061": {
519
+ "content": "<extra_id_38>",
520
+ "lstrip": false,
521
+ "normalized": false,
522
+ "rstrip": false,
523
+ "single_word": false,
524
+ "special": true
525
+ },
526
+ "32062": {
527
+ "content": "<extra_id_37>",
528
+ "lstrip": false,
529
+ "normalized": false,
530
+ "rstrip": false,
531
+ "single_word": false,
532
+ "special": true
533
+ },
534
+ "32063": {
535
+ "content": "<extra_id_36>",
536
+ "lstrip": false,
537
+ "normalized": false,
538
+ "rstrip": false,
539
+ "single_word": false,
540
+ "special": true
541
+ },
542
+ "32064": {
543
+ "content": "<extra_id_35>",
544
+ "lstrip": false,
545
+ "normalized": false,
546
+ "rstrip": false,
547
+ "single_word": false,
548
+ "special": true
549
+ },
550
+ "32065": {
551
+ "content": "<extra_id_34>",
552
+ "lstrip": false,
553
+ "normalized": false,
554
+ "rstrip": false,
555
+ "single_word": false,
556
+ "special": true
557
+ },
558
+ "32066": {
559
+ "content": "<extra_id_33>",
560
+ "lstrip": false,
561
+ "normalized": false,
562
+ "rstrip": false,
563
+ "single_word": false,
564
+ "special": true
565
+ },
566
+ "32067": {
567
+ "content": "<extra_id_32>",
568
+ "lstrip": false,
569
+ "normalized": false,
570
+ "rstrip": false,
571
+ "single_word": false,
572
+ "special": true
573
+ },
574
+ "32068": {
575
+ "content": "<extra_id_31>",
576
+ "lstrip": false,
577
+ "normalized": false,
578
+ "rstrip": false,
579
+ "single_word": false,
580
+ "special": true
581
+ },
582
+ "32069": {
583
+ "content": "<extra_id_30>",
584
+ "lstrip": false,
585
+ "normalized": false,
586
+ "rstrip": false,
587
+ "single_word": false,
588
+ "special": true
589
+ },
590
+ "32070": {
591
+ "content": "<extra_id_29>",
592
+ "lstrip": false,
593
+ "normalized": false,
594
+ "rstrip": false,
595
+ "single_word": false,
596
+ "special": true
597
+ },
598
+ "32071": {
599
+ "content": "<extra_id_28>",
600
+ "lstrip": false,
601
+ "normalized": false,
602
+ "rstrip": false,
603
+ "single_word": false,
604
+ "special": true
605
+ },
606
+ "32072": {
607
+ "content": "<extra_id_27>",
608
+ "lstrip": false,
609
+ "normalized": false,
610
+ "rstrip": false,
611
+ "single_word": false,
612
+ "special": true
613
+ },
614
+ "32073": {
615
+ "content": "<extra_id_26>",
616
+ "lstrip": false,
617
+ "normalized": false,
618
+ "rstrip": false,
619
+ "single_word": false,
620
+ "special": true
621
+ },
622
+ "32074": {
623
+ "content": "<extra_id_25>",
624
+ "lstrip": false,
625
+ "normalized": false,
626
+ "rstrip": false,
627
+ "single_word": false,
628
+ "special": true
629
+ },
630
+ "32075": {
631
+ "content": "<extra_id_24>",
632
+ "lstrip": false,
633
+ "normalized": false,
634
+ "rstrip": false,
635
+ "single_word": false,
636
+ "special": true
637
+ },
638
+ "32076": {
639
+ "content": "<extra_id_23>",
640
+ "lstrip": false,
641
+ "normalized": false,
642
+ "rstrip": false,
643
+ "single_word": false,
644
+ "special": true
645
+ },
646
+ "32077": {
647
+ "content": "<extra_id_22>",
648
+ "lstrip": false,
649
+ "normalized": false,
650
+ "rstrip": false,
651
+ "single_word": false,
652
+ "special": true
653
+ },
654
+ "32078": {
655
+ "content": "<extra_id_21>",
656
+ "lstrip": false,
657
+ "normalized": false,
658
+ "rstrip": false,
659
+ "single_word": false,
660
+ "special": true
661
+ },
662
+ "32079": {
663
+ "content": "<extra_id_20>",
664
+ "lstrip": false,
665
+ "normalized": false,
666
+ "rstrip": false,
667
+ "single_word": false,
668
+ "special": true
669
+ },
670
+ "32080": {
671
+ "content": "<extra_id_19>",
672
+ "lstrip": false,
673
+ "normalized": false,
674
+ "rstrip": false,
675
+ "single_word": false,
676
+ "special": true
677
+ },
678
+ "32081": {
679
+ "content": "<extra_id_18>",
680
+ "lstrip": false,
681
+ "normalized": false,
682
+ "rstrip": false,
683
+ "single_word": false,
684
+ "special": true
685
+ },
686
+ "32082": {
687
+ "content": "<extra_id_17>",
688
+ "lstrip": false,
689
+ "normalized": false,
690
+ "rstrip": false,
691
+ "single_word": false,
692
+ "special": true
693
+ },
694
+ "32083": {
695
+ "content": "<extra_id_16>",
696
+ "lstrip": false,
697
+ "normalized": false,
698
+ "rstrip": false,
699
+ "single_word": false,
700
+ "special": true
701
+ },
702
+ "32084": {
703
+ "content": "<extra_id_15>",
704
+ "lstrip": false,
705
+ "normalized": false,
706
+ "rstrip": false,
707
+ "single_word": false,
708
+ "special": true
709
+ },
710
+ "32085": {
711
+ "content": "<extra_id_14>",
712
+ "lstrip": false,
713
+ "normalized": false,
714
+ "rstrip": false,
715
+ "single_word": false,
716
+ "special": true
717
+ },
718
+ "32086": {
719
+ "content": "<extra_id_13>",
720
+ "lstrip": false,
721
+ "normalized": false,
722
+ "rstrip": false,
723
+ "single_word": false,
724
+ "special": true
725
+ },
726
+ "32087": {
727
+ "content": "<extra_id_12>",
728
+ "lstrip": false,
729
+ "normalized": false,
730
+ "rstrip": false,
731
+ "single_word": false,
732
+ "special": true
733
+ },
734
+ "32088": {
735
+ "content": "<extra_id_11>",
736
+ "lstrip": false,
737
+ "normalized": false,
738
+ "rstrip": false,
739
+ "single_word": false,
740
+ "special": true
741
+ },
742
+ "32089": {
743
+ "content": "<extra_id_10>",
744
+ "lstrip": false,
745
+ "normalized": false,
746
+ "rstrip": false,
747
+ "single_word": false,
748
+ "special": true
749
+ },
750
+ "32090": {
751
+ "content": "<extra_id_9>",
752
+ "lstrip": false,
753
+ "normalized": false,
754
+ "rstrip": false,
755
+ "single_word": false,
756
+ "special": true
757
+ },
758
+ "32091": {
759
+ "content": "<extra_id_8>",
760
+ "lstrip": false,
761
+ "normalized": false,
762
+ "rstrip": false,
763
+ "single_word": false,
764
+ "special": true
765
+ },
766
+ "32092": {
767
+ "content": "<extra_id_7>",
768
+ "lstrip": false,
769
+ "normalized": false,
770
+ "rstrip": false,
771
+ "single_word": false,
772
+ "special": true
773
+ },
774
+ "32093": {
775
+ "content": "<extra_id_6>",
776
+ "lstrip": false,
777
+ "normalized": false,
778
+ "rstrip": false,
779
+ "single_word": false,
780
+ "special": true
781
+ },
782
+ "32094": {
783
+ "content": "<extra_id_5>",
784
+ "lstrip": false,
785
+ "normalized": false,
786
+ "rstrip": false,
787
+ "single_word": false,
788
+ "special": true
789
+ },
790
+ "32095": {
791
+ "content": "<extra_id_4>",
792
+ "lstrip": false,
793
+ "normalized": false,
794
+ "rstrip": false,
795
+ "single_word": false,
796
+ "special": true
797
+ },
798
+ "32096": {
799
+ "content": "<extra_id_3>",
800
+ "lstrip": false,
801
+ "normalized": false,
802
+ "rstrip": false,
803
+ "single_word": false,
804
+ "special": true
805
+ },
806
+ "32097": {
807
+ "content": "<extra_id_2>",
808
+ "lstrip": false,
809
+ "normalized": false,
810
+ "rstrip": false,
811
+ "single_word": false,
812
+ "special": true
813
+ },
814
+ "32098": {
815
+ "content": "<extra_id_1>",
816
+ "lstrip": false,
817
+ "normalized": false,
818
+ "rstrip": false,
819
+ "single_word": false,
820
+ "special": true
821
+ },
822
+ "32099": {
823
+ "content": "<extra_id_0>",
824
+ "lstrip": false,
825
+ "normalized": false,
826
+ "rstrip": false,
827
+ "single_word": false,
828
+ "special": true
829
+ }
830
+ },
831
+ "additional_special_tokens": [
832
+ "<extra_id_99>",
833
+ "<extra_id_98>",
834
+ "<extra_id_97>",
835
+ "<extra_id_96>",
836
+ "<extra_id_95>",
837
+ "<extra_id_94>",
838
+ "<extra_id_93>",
839
+ "<extra_id_92>",
840
+ "<extra_id_91>",
841
+ "<extra_id_90>",
842
+ "<extra_id_89>",
843
+ "<extra_id_88>",
844
+ "<extra_id_87>",
845
+ "<extra_id_86>",
846
+ "<extra_id_85>",
847
+ "<extra_id_84>",
848
+ "<extra_id_83>",
849
+ "<extra_id_82>",
850
+ "<extra_id_81>",
851
+ "<extra_id_80>",
852
+ "<extra_id_79>",
853
+ "<extra_id_78>",
854
+ "<extra_id_77>",
855
+ "<extra_id_76>",
856
+ "<extra_id_75>",
857
+ "<extra_id_74>",
858
+ "<extra_id_73>",
859
+ "<extra_id_72>",
860
+ "<extra_id_71>",
861
+ "<extra_id_70>",
862
+ "<extra_id_69>",
863
+ "<extra_id_68>",
864
+ "<extra_id_67>",
865
+ "<extra_id_66>",
866
+ "<extra_id_65>",
867
+ "<extra_id_64>",
868
+ "<extra_id_63>",
869
+ "<extra_id_62>",
870
+ "<extra_id_61>",
871
+ "<extra_id_60>",
872
+ "<extra_id_59>",
873
+ "<extra_id_58>",
874
+ "<extra_id_57>",
875
+ "<extra_id_56>",
876
+ "<extra_id_55>",
877
+ "<extra_id_54>",
878
+ "<extra_id_53>",
879
+ "<extra_id_52>",
880
+ "<extra_id_51>",
881
+ "<extra_id_50>",
882
+ "<extra_id_49>",
883
+ "<extra_id_48>",
884
+ "<extra_id_47>",
885
+ "<extra_id_46>",
886
+ "<extra_id_45>",
887
+ "<extra_id_44>",
888
+ "<extra_id_43>",
889
+ "<extra_id_42>",
890
+ "<extra_id_41>",
891
+ "<extra_id_40>",
892
+ "<extra_id_39>",
893
+ "<extra_id_38>",
894
+ "<extra_id_37>",
895
+ "<extra_id_36>",
896
+ "<extra_id_35>",
897
+ "<extra_id_34>",
898
+ "<extra_id_33>",
899
+ "<extra_id_32>",
900
+ "<extra_id_31>",
901
+ "<extra_id_30>",
902
+ "<extra_id_29>",
903
+ "<extra_id_28>",
904
+ "<extra_id_27>",
905
+ "<extra_id_26>",
906
+ "<extra_id_25>",
907
+ "<extra_id_24>",
908
+ "<extra_id_23>",
909
+ "<extra_id_22>",
910
+ "<extra_id_21>",
911
+ "<extra_id_20>",
912
+ "<extra_id_19>",
913
+ "<extra_id_18>",
914
+ "<extra_id_17>",
915
+ "<extra_id_16>",
916
+ "<extra_id_15>",
917
+ "<extra_id_14>",
918
+ "<extra_id_13>",
919
+ "<extra_id_12>",
920
+ "<extra_id_11>",
921
+ "<extra_id_10>",
922
+ "<extra_id_9>",
923
+ "<extra_id_8>",
924
+ "<extra_id_7>",
925
+ "<extra_id_6>",
926
+ "<extra_id_5>",
927
+ "<extra_id_4>",
928
+ "<extra_id_3>",
929
+ "<extra_id_2>",
930
+ "<extra_id_1>",
931
+ "<extra_id_0>"
932
+ ],
933
+ "bos_token": "<s>",
934
+ "clean_up_tokenization_spaces": false,
935
+ "eos_token": "</s>",
936
+ "legacy": false,
937
+ "model_max_length": 768,
938
+ "pad_token": "<s>",
939
+ "padding_side": "right",
940
+ "sp_model_kwargs": {},
941
+ "spaces_between_special_tokens": false,
942
+ "tokenizer_class": "LlamaTokenizer",
943
+ "unk_token": "<unk>",
944
+ "use_default_system_prompt": false
945
+ }
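The tokenizer above is a sentencepiece tokenizer exposed through `LlamaTokenizer`, with T5-style `<extra_id_*>` sentinel tokens registered as specials, right-padded with `<s>`, and capped at 768 tokens. A minimal loading sketch with `transformers` (the local path is a placeholder for wherever this repository is checked out):

```python
from transformers import AutoTokenizer

# Placeholder path: point this at a local checkout of the repository.
tokenizer = AutoTokenizer.from_pretrained("./pony-v7", subfolder="tokenizer")

prompt = "an anthro fox reading a book in a cozy library"
batch = tokenizer(
    prompt,
    padding="max_length",                   # pad_token is "<s>", padding_side is "right"
    max_length=tokenizer.model_max_length,  # 768 per the config above
    truncation=True,
    return_tensors="pt",
)
print(batch.input_ids.shape)  # torch.Size([1, 768])
```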
transformer/config.json ADDED
@@ -0,0 +1,15 @@
1
+ {
2
+ "_class_name": "AuraFlowTransformer2DModel",
3
+ "_diffusers_version": "0.34.0.dev0",
4
+ "attention_head_dim": 256,
5
+ "caption_projection_dim": 3072,
6
+ "in_channels": 4,
7
+ "joint_attention_dim": 2048,
8
+ "num_attention_heads": 12,
9
+ "num_mmdit_layers": 4,
10
+ "num_single_dit_layers": 32,
11
+ "out_channels": 4,
12
+ "patch_size": 2,
13
+ "pos_embed_max_size": 9216,
14
+ "sample_size": 64
15
+ }
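This is the standard `AuraFlowTransformer2DModel` configuration from `diffusers` (4 joint MMDiT blocks plus 32 single DiT blocks); the weights are sharded across the three safetensors files listed next, roughly 27 GB in total. A hedged sketch of loading just the transformer, again with a placeholder local path:

```python
import torch
from diffusers import AuraFlowTransformer2DModel

# Placeholder path: a local checkout of this repository.
transformer = AuraFlowTransformer2DModel.from_pretrained(
    "./pony-v7",
    subfolder="transformer",
    torch_dtype=torch.bfloat16,  # cast on load to roughly halve the memory footprint
)
print(f"{sum(p.numel() for p in transformer.parameters()) / 1e9:.1f}B parameters")
```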
transformer/diffusion_pytorch_model-00001-of-00003.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:9193bcfa8e27546ca1daa0f6318dc06fc943c61740a397f8dd40bd88382c7a82
3
+ size 9994324992
transformer/diffusion_pytorch_model-00002-of-00003.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:3b44e4e6a4d2e7dede29a27bd17bd2e65447111f26f47a4b488b9a731614b212
3
+ size 9890183496
transformer/diffusion_pytorch_model-00003-of-00003.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:c9b14abad4e7e98999646b595ef93a5e35911785c2d84a921885d744da716a83
3
+ size 7549955048
transformer/diffusion_pytorch_model.safetensors.index.json ADDED
@@ -0,0 +1,338 @@
1
+ {
2
+ "metadata": {
3
+ "total_size": 27434422272
4
+ },
5
+ "weight_map": {
6
+ "context_embedder.weight": "diffusion_pytorch_model-00001-of-00003.safetensors",
7
+ "joint_transformer_blocks.0.attn.add_k_proj.weight": "diffusion_pytorch_model-00001-of-00003.safetensors",
8
+ "joint_transformer_blocks.0.attn.add_q_proj.weight": "diffusion_pytorch_model-00001-of-00003.safetensors",
9
+ "joint_transformer_blocks.0.attn.add_v_proj.weight": "diffusion_pytorch_model-00001-of-00003.safetensors",
10
+ "joint_transformer_blocks.0.attn.to_add_out.weight": "diffusion_pytorch_model-00001-of-00003.safetensors",
11
+ "joint_transformer_blocks.0.attn.to_k.weight": "diffusion_pytorch_model-00001-of-00003.safetensors",
12
+ "joint_transformer_blocks.0.attn.to_out.0.weight": "diffusion_pytorch_model-00001-of-00003.safetensors",
13
+ "joint_transformer_blocks.0.attn.to_q.weight": "diffusion_pytorch_model-00001-of-00003.safetensors",
14
+ "joint_transformer_blocks.0.attn.to_v.weight": "diffusion_pytorch_model-00001-of-00003.safetensors",
15
+ "joint_transformer_blocks.0.ff.linear_1.weight": "diffusion_pytorch_model-00001-of-00003.safetensors",
16
+ "joint_transformer_blocks.0.ff.linear_2.weight": "diffusion_pytorch_model-00001-of-00003.safetensors",
17
+ "joint_transformer_blocks.0.ff.out_projection.weight": "diffusion_pytorch_model-00001-of-00003.safetensors",
18
+ "joint_transformer_blocks.0.ff_context.linear_1.weight": "diffusion_pytorch_model-00001-of-00003.safetensors",
19
+ "joint_transformer_blocks.0.ff_context.linear_2.weight": "diffusion_pytorch_model-00001-of-00003.safetensors",
20
+ "joint_transformer_blocks.0.ff_context.out_projection.weight": "diffusion_pytorch_model-00001-of-00003.safetensors",
21
+ "joint_transformer_blocks.0.norm1.linear.weight": "diffusion_pytorch_model-00001-of-00003.safetensors",
22
+ "joint_transformer_blocks.0.norm1_context.linear.weight": "diffusion_pytorch_model-00001-of-00003.safetensors",
23
+ "joint_transformer_blocks.1.attn.add_k_proj.weight": "diffusion_pytorch_model-00001-of-00003.safetensors",
24
+ "joint_transformer_blocks.1.attn.add_q_proj.weight": "diffusion_pytorch_model-00001-of-00003.safetensors",
25
+ "joint_transformer_blocks.1.attn.add_v_proj.weight": "diffusion_pytorch_model-00001-of-00003.safetensors",
26
+ "joint_transformer_blocks.1.attn.to_add_out.weight": "diffusion_pytorch_model-00001-of-00003.safetensors",
27
+ "joint_transformer_blocks.1.attn.to_k.weight": "diffusion_pytorch_model-00001-of-00003.safetensors",
28
+ "joint_transformer_blocks.1.attn.to_out.0.weight": "diffusion_pytorch_model-00001-of-00003.safetensors",
29
+ "joint_transformer_blocks.1.attn.to_q.weight": "diffusion_pytorch_model-00001-of-00003.safetensors",
30
+ "joint_transformer_blocks.1.attn.to_v.weight": "diffusion_pytorch_model-00001-of-00003.safetensors",
31
+ "joint_transformer_blocks.1.ff.linear_1.weight": "diffusion_pytorch_model-00001-of-00003.safetensors",
32
+ "joint_transformer_blocks.1.ff.linear_2.weight": "diffusion_pytorch_model-00001-of-00003.safetensors",
33
+ "joint_transformer_blocks.1.ff.out_projection.weight": "diffusion_pytorch_model-00001-of-00003.safetensors",
34
+ "joint_transformer_blocks.1.ff_context.linear_1.weight": "diffusion_pytorch_model-00001-of-00003.safetensors",
35
+ "joint_transformer_blocks.1.ff_context.linear_2.weight": "diffusion_pytorch_model-00001-of-00003.safetensors",
36
+ "joint_transformer_blocks.1.ff_context.out_projection.weight": "diffusion_pytorch_model-00001-of-00003.safetensors",
37
+ "joint_transformer_blocks.1.norm1.linear.weight": "diffusion_pytorch_model-00001-of-00003.safetensors",
38
+ "joint_transformer_blocks.1.norm1_context.linear.weight": "diffusion_pytorch_model-00001-of-00003.safetensors",
39
+ "joint_transformer_blocks.2.attn.add_k_proj.weight": "diffusion_pytorch_model-00001-of-00003.safetensors",
40
+ "joint_transformer_blocks.2.attn.add_q_proj.weight": "diffusion_pytorch_model-00001-of-00003.safetensors",
41
+ "joint_transformer_blocks.2.attn.add_v_proj.weight": "diffusion_pytorch_model-00001-of-00003.safetensors",
42
+ "joint_transformer_blocks.2.attn.to_add_out.weight": "diffusion_pytorch_model-00001-of-00003.safetensors",
43
+ "joint_transformer_blocks.2.attn.to_k.weight": "diffusion_pytorch_model-00001-of-00003.safetensors",
44
+ "joint_transformer_blocks.2.attn.to_out.0.weight": "diffusion_pytorch_model-00001-of-00003.safetensors",
45
+ "joint_transformer_blocks.2.attn.to_q.weight": "diffusion_pytorch_model-00001-of-00003.safetensors",
46
+ "joint_transformer_blocks.2.attn.to_v.weight": "diffusion_pytorch_model-00001-of-00003.safetensors",
47
+ "joint_transformer_blocks.2.ff.linear_1.weight": "diffusion_pytorch_model-00001-of-00003.safetensors",
48
+ "joint_transformer_blocks.2.ff.linear_2.weight": "diffusion_pytorch_model-00001-of-00003.safetensors",
49
+ "joint_transformer_blocks.2.ff.out_projection.weight": "diffusion_pytorch_model-00001-of-00003.safetensors",
50
+ "joint_transformer_blocks.2.ff_context.linear_1.weight": "diffusion_pytorch_model-00001-of-00003.safetensors",
51
+ "joint_transformer_blocks.2.ff_context.linear_2.weight": "diffusion_pytorch_model-00001-of-00003.safetensors",
52
+ "joint_transformer_blocks.2.ff_context.out_projection.weight": "diffusion_pytorch_model-00001-of-00003.safetensors",
53
+ "joint_transformer_blocks.2.norm1.linear.weight": "diffusion_pytorch_model-00001-of-00003.safetensors",
54
+ "joint_transformer_blocks.2.norm1_context.linear.weight": "diffusion_pytorch_model-00001-of-00003.safetensors",
55
+ "joint_transformer_blocks.3.attn.add_k_proj.weight": "diffusion_pytorch_model-00001-of-00003.safetensors",
56
+ "joint_transformer_blocks.3.attn.add_q_proj.weight": "diffusion_pytorch_model-00001-of-00003.safetensors",
57
+ "joint_transformer_blocks.3.attn.add_v_proj.weight": "diffusion_pytorch_model-00001-of-00003.safetensors",
58
+ "joint_transformer_blocks.3.attn.to_add_out.weight": "diffusion_pytorch_model-00001-of-00003.safetensors",
59
+ "joint_transformer_blocks.3.attn.to_k.weight": "diffusion_pytorch_model-00001-of-00003.safetensors",
60
+ "joint_transformer_blocks.3.attn.to_out.0.weight": "diffusion_pytorch_model-00001-of-00003.safetensors",
61
+ "joint_transformer_blocks.3.attn.to_q.weight": "diffusion_pytorch_model-00001-of-00003.safetensors",
62
+ "joint_transformer_blocks.3.attn.to_v.weight": "diffusion_pytorch_model-00001-of-00003.safetensors",
63
+ "joint_transformer_blocks.3.ff.linear_1.weight": "diffusion_pytorch_model-00001-of-00003.safetensors",
64
+ "joint_transformer_blocks.3.ff.linear_2.weight": "diffusion_pytorch_model-00001-of-00003.safetensors",
65
+ "joint_transformer_blocks.3.ff.out_projection.weight": "diffusion_pytorch_model-00001-of-00003.safetensors",
66
+ "joint_transformer_blocks.3.ff_context.linear_1.weight": "diffusion_pytorch_model-00001-of-00003.safetensors",
67
+ "joint_transformer_blocks.3.ff_context.linear_2.weight": "diffusion_pytorch_model-00001-of-00003.safetensors",
68
+ "joint_transformer_blocks.3.ff_context.out_projection.weight": "diffusion_pytorch_model-00001-of-00003.safetensors",
69
+ "joint_transformer_blocks.3.norm1.linear.weight": "diffusion_pytorch_model-00001-of-00003.safetensors",
70
+ "joint_transformer_blocks.3.norm1_context.linear.weight": "diffusion_pytorch_model-00001-of-00003.safetensors",
71
+ "norm_out.linear.weight": "diffusion_pytorch_model-00003-of-00003.safetensors",
72
+ "pos_embed.pos_embed": "diffusion_pytorch_model-00001-of-00003.safetensors",
73
+ "pos_embed.proj.bias": "diffusion_pytorch_model-00001-of-00003.safetensors",
74
+ "pos_embed.proj.weight": "diffusion_pytorch_model-00001-of-00003.safetensors",
75
+ "proj_out.weight": "diffusion_pytorch_model-00003-of-00003.safetensors",
76
+ "register_tokens": "diffusion_pytorch_model-00001-of-00003.safetensors",
77
+ "single_transformer_blocks.0.attn.to_k.weight": "diffusion_pytorch_model-00001-of-00003.safetensors",
78
+ "single_transformer_blocks.0.attn.to_out.0.weight": "diffusion_pytorch_model-00001-of-00003.safetensors",
79
+ "single_transformer_blocks.0.attn.to_q.weight": "diffusion_pytorch_model-00001-of-00003.safetensors",
80
+ "single_transformer_blocks.0.attn.to_v.weight": "diffusion_pytorch_model-00001-of-00003.safetensors",
81
+ "single_transformer_blocks.0.ff.linear_1.weight": "diffusion_pytorch_model-00001-of-00003.safetensors",
82
+ "single_transformer_blocks.0.ff.linear_2.weight": "diffusion_pytorch_model-00001-of-00003.safetensors",
83
+ "single_transformer_blocks.0.ff.out_projection.weight": "diffusion_pytorch_model-00001-of-00003.safetensors",
84
+ "single_transformer_blocks.0.norm1.linear.weight": "diffusion_pytorch_model-00001-of-00003.safetensors",
85
+ "single_transformer_blocks.1.attn.to_k.weight": "diffusion_pytorch_model-00001-of-00003.safetensors",
86
+ "single_transformer_blocks.1.attn.to_out.0.weight": "diffusion_pytorch_model-00001-of-00003.safetensors",
87
+ "single_transformer_blocks.1.attn.to_q.weight": "diffusion_pytorch_model-00001-of-00003.safetensors",
88
+ "single_transformer_blocks.1.attn.to_v.weight": "diffusion_pytorch_model-00001-of-00003.safetensors",
89
+ "single_transformer_blocks.1.ff.linear_1.weight": "diffusion_pytorch_model-00001-of-00003.safetensors",
90
+ "single_transformer_blocks.1.ff.linear_2.weight": "diffusion_pytorch_model-00001-of-00003.safetensors",
91
+ "single_transformer_blocks.1.ff.out_projection.weight": "diffusion_pytorch_model-00001-of-00003.safetensors",
92
+ "single_transformer_blocks.1.norm1.linear.weight": "diffusion_pytorch_model-00001-of-00003.safetensors",
93
+ "single_transformer_blocks.10.attn.to_k.weight": "diffusion_pytorch_model-00002-of-00003.safetensors",
94
+ "single_transformer_blocks.10.attn.to_out.0.weight": "diffusion_pytorch_model-00002-of-00003.safetensors",
95
+ "single_transformer_blocks.10.attn.to_q.weight": "diffusion_pytorch_model-00002-of-00003.safetensors",
96
+ "single_transformer_blocks.10.attn.to_v.weight": "diffusion_pytorch_model-00002-of-00003.safetensors",
97
+ "single_transformer_blocks.10.ff.linear_1.weight": "diffusion_pytorch_model-00002-of-00003.safetensors",
98
+ "single_transformer_blocks.10.ff.linear_2.weight": "diffusion_pytorch_model-00002-of-00003.safetensors",
99
+ "single_transformer_blocks.10.ff.out_projection.weight": "diffusion_pytorch_model-00002-of-00003.safetensors",
100
+ "single_transformer_blocks.10.norm1.linear.weight": "diffusion_pytorch_model-00002-of-00003.safetensors",
101
+ "single_transformer_blocks.11.attn.to_k.weight": "diffusion_pytorch_model-00002-of-00003.safetensors",
102
+ "single_transformer_blocks.11.attn.to_out.0.weight": "diffusion_pytorch_model-00002-of-00003.safetensors",
103
+ "single_transformer_blocks.11.attn.to_q.weight": "diffusion_pytorch_model-00002-of-00003.safetensors",
104
+ "single_transformer_blocks.11.attn.to_v.weight": "diffusion_pytorch_model-00002-of-00003.safetensors",
105
+ "single_transformer_blocks.11.ff.linear_1.weight": "diffusion_pytorch_model-00002-of-00003.safetensors",
106
+ "single_transformer_blocks.11.ff.linear_2.weight": "diffusion_pytorch_model-00002-of-00003.safetensors",
107
+ "single_transformer_blocks.11.ff.out_projection.weight": "diffusion_pytorch_model-00002-of-00003.safetensors",
108
+ "single_transformer_blocks.11.norm1.linear.weight": "diffusion_pytorch_model-00002-of-00003.safetensors",
109
+ "single_transformer_blocks.12.attn.to_k.weight": "diffusion_pytorch_model-00002-of-00003.safetensors",
110
+ "single_transformer_blocks.12.attn.to_out.0.weight": "diffusion_pytorch_model-00002-of-00003.safetensors",
111
+ "single_transformer_blocks.12.attn.to_q.weight": "diffusion_pytorch_model-00002-of-00003.safetensors",
112
+ "single_transformer_blocks.12.attn.to_v.weight": "diffusion_pytorch_model-00002-of-00003.safetensors",
113
+ "single_transformer_blocks.12.ff.linear_1.weight": "diffusion_pytorch_model-00002-of-00003.safetensors",
114
+ "single_transformer_blocks.12.ff.linear_2.weight": "diffusion_pytorch_model-00002-of-00003.safetensors",
115
+ "single_transformer_blocks.12.ff.out_projection.weight": "diffusion_pytorch_model-00002-of-00003.safetensors",
116
+ "single_transformer_blocks.12.norm1.linear.weight": "diffusion_pytorch_model-00002-of-00003.safetensors",
117
+ "single_transformer_blocks.13.attn.to_k.weight": "diffusion_pytorch_model-00002-of-00003.safetensors",
118
+ "single_transformer_blocks.13.attn.to_out.0.weight": "diffusion_pytorch_model-00002-of-00003.safetensors",
119
+ "single_transformer_blocks.13.attn.to_q.weight": "diffusion_pytorch_model-00002-of-00003.safetensors",
120
+ "single_transformer_blocks.13.attn.to_v.weight": "diffusion_pytorch_model-00002-of-00003.safetensors",
121
+ "single_transformer_blocks.13.ff.linear_1.weight": "diffusion_pytorch_model-00002-of-00003.safetensors",
122
+ "single_transformer_blocks.13.ff.linear_2.weight": "diffusion_pytorch_model-00002-of-00003.safetensors",
123
+ "single_transformer_blocks.13.ff.out_projection.weight": "diffusion_pytorch_model-00002-of-00003.safetensors",
124
+ "single_transformer_blocks.13.norm1.linear.weight": "diffusion_pytorch_model-00002-of-00003.safetensors",
125
+ "single_transformer_blocks.14.attn.to_k.weight": "diffusion_pytorch_model-00002-of-00003.safetensors",
126
+ "single_transformer_blocks.14.attn.to_out.0.weight": "diffusion_pytorch_model-00002-of-00003.safetensors",
127
+ "single_transformer_blocks.14.attn.to_q.weight": "diffusion_pytorch_model-00002-of-00003.safetensors",
128
+ "single_transformer_blocks.14.attn.to_v.weight": "diffusion_pytorch_model-00002-of-00003.safetensors",
129
+ "single_transformer_blocks.14.ff.linear_1.weight": "diffusion_pytorch_model-00002-of-00003.safetensors",
130
+ "single_transformer_blocks.14.ff.linear_2.weight": "diffusion_pytorch_model-00002-of-00003.safetensors",
131
+ "single_transformer_blocks.14.ff.out_projection.weight": "diffusion_pytorch_model-00002-of-00003.safetensors",
132
+ "single_transformer_blocks.14.norm1.linear.weight": "diffusion_pytorch_model-00002-of-00003.safetensors",
133
+ "single_transformer_blocks.15.attn.to_k.weight": "diffusion_pytorch_model-00002-of-00003.safetensors",
134
+ "single_transformer_blocks.15.attn.to_out.0.weight": "diffusion_pytorch_model-00002-of-00003.safetensors",
135
+ "single_transformer_blocks.15.attn.to_q.weight": "diffusion_pytorch_model-00002-of-00003.safetensors",
136
+ "single_transformer_blocks.15.attn.to_v.weight": "diffusion_pytorch_model-00002-of-00003.safetensors",
137
+ "single_transformer_blocks.15.ff.linear_1.weight": "diffusion_pytorch_model-00002-of-00003.safetensors",
138
+ "single_transformer_blocks.15.ff.linear_2.weight": "diffusion_pytorch_model-00002-of-00003.safetensors",
139
+ "single_transformer_blocks.15.ff.out_projection.weight": "diffusion_pytorch_model-00002-of-00003.safetensors",
140
+ "single_transformer_blocks.15.norm1.linear.weight": "diffusion_pytorch_model-00002-of-00003.safetensors",
141
+ "single_transformer_blocks.16.attn.to_k.weight": "diffusion_pytorch_model-00002-of-00003.safetensors",
142
+ "single_transformer_blocks.16.attn.to_out.0.weight": "diffusion_pytorch_model-00002-of-00003.safetensors",
143
+ "single_transformer_blocks.16.attn.to_q.weight": "diffusion_pytorch_model-00002-of-00003.safetensors",
144
+ "single_transformer_blocks.16.attn.to_v.weight": "diffusion_pytorch_model-00002-of-00003.safetensors",
145
+ "single_transformer_blocks.16.ff.linear_1.weight": "diffusion_pytorch_model-00002-of-00003.safetensors",
146
+ "single_transformer_blocks.16.ff.linear_2.weight": "diffusion_pytorch_model-00002-of-00003.safetensors",
147
+ "single_transformer_blocks.16.ff.out_projection.weight": "diffusion_pytorch_model-00002-of-00003.safetensors",
148
+ "single_transformer_blocks.16.norm1.linear.weight": "diffusion_pytorch_model-00002-of-00003.safetensors",
149
+ "single_transformer_blocks.17.attn.to_k.weight": "diffusion_pytorch_model-00002-of-00003.safetensors",
150
+ "single_transformer_blocks.17.attn.to_out.0.weight": "diffusion_pytorch_model-00002-of-00003.safetensors",
151
+ "single_transformer_blocks.17.attn.to_q.weight": "diffusion_pytorch_model-00002-of-00003.safetensors",
152
+ "single_transformer_blocks.17.attn.to_v.weight": "diffusion_pytorch_model-00002-of-00003.safetensors",
153
+ "single_transformer_blocks.17.ff.linear_1.weight": "diffusion_pytorch_model-00002-of-00003.safetensors",
154
+ "single_transformer_blocks.17.ff.linear_2.weight": "diffusion_pytorch_model-00002-of-00003.safetensors",
155
+ "single_transformer_blocks.17.ff.out_projection.weight": "diffusion_pytorch_model-00002-of-00003.safetensors",
156
+ "single_transformer_blocks.17.norm1.linear.weight": "diffusion_pytorch_model-00002-of-00003.safetensors",
157
+ "single_transformer_blocks.18.attn.to_k.weight": "diffusion_pytorch_model-00002-of-00003.safetensors",
158
+ "single_transformer_blocks.18.attn.to_out.0.weight": "diffusion_pytorch_model-00002-of-00003.safetensors",
159
+ "single_transformer_blocks.18.attn.to_q.weight": "diffusion_pytorch_model-00002-of-00003.safetensors",
160
+ "single_transformer_blocks.18.attn.to_v.weight": "diffusion_pytorch_model-00002-of-00003.safetensors",
161
+ "single_transformer_blocks.18.ff.linear_1.weight": "diffusion_pytorch_model-00002-of-00003.safetensors",
162
+ "single_transformer_blocks.18.ff.linear_2.weight": "diffusion_pytorch_model-00002-of-00003.safetensors",
163
+ "single_transformer_blocks.18.ff.out_projection.weight": "diffusion_pytorch_model-00002-of-00003.safetensors",
164
+ "single_transformer_blocks.18.norm1.linear.weight": "diffusion_pytorch_model-00002-of-00003.safetensors",
165
+ "single_transformer_blocks.19.attn.to_k.weight": "diffusion_pytorch_model-00002-of-00003.safetensors",
166
+ "single_transformer_blocks.19.attn.to_out.0.weight": "diffusion_pytorch_model-00002-of-00003.safetensors",
167
+ "single_transformer_blocks.19.attn.to_q.weight": "diffusion_pytorch_model-00002-of-00003.safetensors",
168
+ "single_transformer_blocks.19.attn.to_v.weight": "diffusion_pytorch_model-00002-of-00003.safetensors",
169
+ "single_transformer_blocks.19.ff.linear_1.weight": "diffusion_pytorch_model-00002-of-00003.safetensors",
170
+ "single_transformer_blocks.19.ff.linear_2.weight": "diffusion_pytorch_model-00002-of-00003.safetensors",
171
+ "single_transformer_blocks.19.ff.out_projection.weight": "diffusion_pytorch_model-00002-of-00003.safetensors",
172
+ "single_transformer_blocks.19.norm1.linear.weight": "diffusion_pytorch_model-00002-of-00003.safetensors",
173
+ "single_transformer_blocks.2.attn.to_k.weight": "diffusion_pytorch_model-00001-of-00003.safetensors",
174
+ "single_transformer_blocks.2.attn.to_out.0.weight": "diffusion_pytorch_model-00001-of-00003.safetensors",
175
+ "single_transformer_blocks.2.attn.to_q.weight": "diffusion_pytorch_model-00001-of-00003.safetensors",
176
+ "single_transformer_blocks.2.attn.to_v.weight": "diffusion_pytorch_model-00001-of-00003.safetensors",
177
+ "single_transformer_blocks.2.ff.linear_1.weight": "diffusion_pytorch_model-00001-of-00003.safetensors",
178
+ "single_transformer_blocks.2.ff.linear_2.weight": "diffusion_pytorch_model-00001-of-00003.safetensors",
179
+ "single_transformer_blocks.2.ff.out_projection.weight": "diffusion_pytorch_model-00001-of-00003.safetensors",
180
+ "single_transformer_blocks.2.norm1.linear.weight": "diffusion_pytorch_model-00001-of-00003.safetensors",
181
+ "single_transformer_blocks.20.attn.to_k.weight": "diffusion_pytorch_model-00002-of-00003.safetensors",
182
+ "single_transformer_blocks.20.attn.to_out.0.weight": "diffusion_pytorch_model-00002-of-00003.safetensors",
183
+ "single_transformer_blocks.20.attn.to_q.weight": "diffusion_pytorch_model-00002-of-00003.safetensors",
184
+ "single_transformer_blocks.20.attn.to_v.weight": "diffusion_pytorch_model-00002-of-00003.safetensors",
185
+ "single_transformer_blocks.20.ff.linear_1.weight": "diffusion_pytorch_model-00002-of-00003.safetensors",
186
+ "single_transformer_blocks.20.ff.linear_2.weight": "diffusion_pytorch_model-00002-of-00003.safetensors",
187
+ "single_transformer_blocks.20.ff.out_projection.weight": "diffusion_pytorch_model-00002-of-00003.safetensors",
188
+ "single_transformer_blocks.20.norm1.linear.weight": "diffusion_pytorch_model-00002-of-00003.safetensors",
189
+ "single_transformer_blocks.21.attn.to_k.weight": "diffusion_pytorch_model-00003-of-00003.safetensors",
190
+ "single_transformer_blocks.21.attn.to_out.0.weight": "diffusion_pytorch_model-00003-of-00003.safetensors",
191
+ "single_transformer_blocks.21.attn.to_q.weight": "diffusion_pytorch_model-00003-of-00003.safetensors",
192
+ "single_transformer_blocks.21.attn.to_v.weight": "diffusion_pytorch_model-00003-of-00003.safetensors",
193
+ "single_transformer_blocks.21.ff.linear_1.weight": "diffusion_pytorch_model-00003-of-00003.safetensors",
194
+ "single_transformer_blocks.21.ff.linear_2.weight": "diffusion_pytorch_model-00003-of-00003.safetensors",
195
+ "single_transformer_blocks.21.ff.out_projection.weight": "diffusion_pytorch_model-00003-of-00003.safetensors",
196
+ "single_transformer_blocks.21.norm1.linear.weight": "diffusion_pytorch_model-00003-of-00003.safetensors",
197
+ "single_transformer_blocks.22.attn.to_k.weight": "diffusion_pytorch_model-00003-of-00003.safetensors",
198
+ "single_transformer_blocks.22.attn.to_out.0.weight": "diffusion_pytorch_model-00003-of-00003.safetensors",
199
+ "single_transformer_blocks.22.attn.to_q.weight": "diffusion_pytorch_model-00003-of-00003.safetensors",
200
+ "single_transformer_blocks.22.attn.to_v.weight": "diffusion_pytorch_model-00003-of-00003.safetensors",
201
+ "single_transformer_blocks.22.ff.linear_1.weight": "diffusion_pytorch_model-00003-of-00003.safetensors",
202
+ "single_transformer_blocks.22.ff.linear_2.weight": "diffusion_pytorch_model-00003-of-00003.safetensors",
203
+ "single_transformer_blocks.22.ff.out_projection.weight": "diffusion_pytorch_model-00003-of-00003.safetensors",
204
+ "single_transformer_blocks.22.norm1.linear.weight": "diffusion_pytorch_model-00003-of-00003.safetensors",
205
+ "single_transformer_blocks.23.attn.to_k.weight": "diffusion_pytorch_model-00003-of-00003.safetensors",
206
+ "single_transformer_blocks.23.attn.to_out.0.weight": "diffusion_pytorch_model-00003-of-00003.safetensors",
207
+ "single_transformer_blocks.23.attn.to_q.weight": "diffusion_pytorch_model-00003-of-00003.safetensors",
208
+ "single_transformer_blocks.23.attn.to_v.weight": "diffusion_pytorch_model-00003-of-00003.safetensors",
209
+ "single_transformer_blocks.23.ff.linear_1.weight": "diffusion_pytorch_model-00003-of-00003.safetensors",
210
+ "single_transformer_blocks.23.ff.linear_2.weight": "diffusion_pytorch_model-00003-of-00003.safetensors",
211
+ "single_transformer_blocks.23.ff.out_projection.weight": "diffusion_pytorch_model-00003-of-00003.safetensors",
212
+ "single_transformer_blocks.23.norm1.linear.weight": "diffusion_pytorch_model-00003-of-00003.safetensors",
213
+ "single_transformer_blocks.24.attn.to_k.weight": "diffusion_pytorch_model-00003-of-00003.safetensors",
214
+ "single_transformer_blocks.24.attn.to_out.0.weight": "diffusion_pytorch_model-00003-of-00003.safetensors",
215
+ "single_transformer_blocks.24.attn.to_q.weight": "diffusion_pytorch_model-00003-of-00003.safetensors",
216
+ "single_transformer_blocks.24.attn.to_v.weight": "diffusion_pytorch_model-00003-of-00003.safetensors",
217
+ "single_transformer_blocks.24.ff.linear_1.weight": "diffusion_pytorch_model-00003-of-00003.safetensors",
218
+ "single_transformer_blocks.24.ff.linear_2.weight": "diffusion_pytorch_model-00003-of-00003.safetensors",
219
+ "single_transformer_blocks.24.ff.out_projection.weight": "diffusion_pytorch_model-00003-of-00003.safetensors",
220
+ "single_transformer_blocks.24.norm1.linear.weight": "diffusion_pytorch_model-00003-of-00003.safetensors",
221
+ "single_transformer_blocks.25.attn.to_k.weight": "diffusion_pytorch_model-00003-of-00003.safetensors",
222
+ "single_transformer_blocks.25.attn.to_out.0.weight": "diffusion_pytorch_model-00003-of-00003.safetensors",
223
+ "single_transformer_blocks.25.attn.to_q.weight": "diffusion_pytorch_model-00003-of-00003.safetensors",
224
+ "single_transformer_blocks.25.attn.to_v.weight": "diffusion_pytorch_model-00003-of-00003.safetensors",
225
+ "single_transformer_blocks.25.ff.linear_1.weight": "diffusion_pytorch_model-00003-of-00003.safetensors",
226
+ "single_transformer_blocks.25.ff.linear_2.weight": "diffusion_pytorch_model-00003-of-00003.safetensors",
227
+ "single_transformer_blocks.25.ff.out_projection.weight": "diffusion_pytorch_model-00003-of-00003.safetensors",
228
+ "single_transformer_blocks.25.norm1.linear.weight": "diffusion_pytorch_model-00003-of-00003.safetensors",
229
+ "single_transformer_blocks.26.attn.to_k.weight": "diffusion_pytorch_model-00003-of-00003.safetensors",
230
+ "single_transformer_blocks.26.attn.to_out.0.weight": "diffusion_pytorch_model-00003-of-00003.safetensors",
231
+ "single_transformer_blocks.26.attn.to_q.weight": "diffusion_pytorch_model-00003-of-00003.safetensors",
232
+ "single_transformer_blocks.26.attn.to_v.weight": "diffusion_pytorch_model-00003-of-00003.safetensors",
233
+ "single_transformer_blocks.26.ff.linear_1.weight": "diffusion_pytorch_model-00003-of-00003.safetensors",
234
+ "single_transformer_blocks.26.ff.linear_2.weight": "diffusion_pytorch_model-00003-of-00003.safetensors",
235
+ "single_transformer_blocks.26.ff.out_projection.weight": "diffusion_pytorch_model-00003-of-00003.safetensors",
236
+ "single_transformer_blocks.26.norm1.linear.weight": "diffusion_pytorch_model-00003-of-00003.safetensors",
237
+ "single_transformer_blocks.27.attn.to_k.weight": "diffusion_pytorch_model-00003-of-00003.safetensors",
238
+ "single_transformer_blocks.27.attn.to_out.0.weight": "diffusion_pytorch_model-00003-of-00003.safetensors",
239
+ "single_transformer_blocks.27.attn.to_q.weight": "diffusion_pytorch_model-00003-of-00003.safetensors",
240
+ "single_transformer_blocks.27.attn.to_v.weight": "diffusion_pytorch_model-00003-of-00003.safetensors",
241
+ "single_transformer_blocks.27.ff.linear_1.weight": "diffusion_pytorch_model-00003-of-00003.safetensors",
242
+ "single_transformer_blocks.27.ff.linear_2.weight": "diffusion_pytorch_model-00003-of-00003.safetensors",
243
+ "single_transformer_blocks.27.ff.out_projection.weight": "diffusion_pytorch_model-00003-of-00003.safetensors",
244
+ "single_transformer_blocks.27.norm1.linear.weight": "diffusion_pytorch_model-00003-of-00003.safetensors",
245
+ "single_transformer_blocks.28.attn.to_k.weight": "diffusion_pytorch_model-00003-of-00003.safetensors",
246
+ "single_transformer_blocks.28.attn.to_out.0.weight": "diffusion_pytorch_model-00003-of-00003.safetensors",
247
+ "single_transformer_blocks.28.attn.to_q.weight": "diffusion_pytorch_model-00003-of-00003.safetensors",
248
+ "single_transformer_blocks.28.attn.to_v.weight": "diffusion_pytorch_model-00003-of-00003.safetensors",
249
+ "single_transformer_blocks.28.ff.linear_1.weight": "diffusion_pytorch_model-00003-of-00003.safetensors",
250
+ "single_transformer_blocks.28.ff.linear_2.weight": "diffusion_pytorch_model-00003-of-00003.safetensors",
251
+ "single_transformer_blocks.28.ff.out_projection.weight": "diffusion_pytorch_model-00003-of-00003.safetensors",
252
+ "single_transformer_blocks.28.norm1.linear.weight": "diffusion_pytorch_model-00003-of-00003.safetensors",
253
+ "single_transformer_blocks.29.attn.to_k.weight": "diffusion_pytorch_model-00003-of-00003.safetensors",
254
+ "single_transformer_blocks.29.attn.to_out.0.weight": "diffusion_pytorch_model-00003-of-00003.safetensors",
255
+ "single_transformer_blocks.29.attn.to_q.weight": "diffusion_pytorch_model-00003-of-00003.safetensors",
256
+ "single_transformer_blocks.29.attn.to_v.weight": "diffusion_pytorch_model-00003-of-00003.safetensors",
257
+ "single_transformer_blocks.29.ff.linear_1.weight": "diffusion_pytorch_model-00003-of-00003.safetensors",
258
+ "single_transformer_blocks.29.ff.linear_2.weight": "diffusion_pytorch_model-00003-of-00003.safetensors",
259
+ "single_transformer_blocks.29.ff.out_projection.weight": "diffusion_pytorch_model-00003-of-00003.safetensors",
260
+ "single_transformer_blocks.29.norm1.linear.weight": "diffusion_pytorch_model-00003-of-00003.safetensors",
261
+ "single_transformer_blocks.3.attn.to_k.weight": "diffusion_pytorch_model-00001-of-00003.safetensors",
262
+ "single_transformer_blocks.3.attn.to_out.0.weight": "diffusion_pytorch_model-00001-of-00003.safetensors",
263
+ "single_transformer_blocks.3.attn.to_q.weight": "diffusion_pytorch_model-00001-of-00003.safetensors",
264
+ "single_transformer_blocks.3.attn.to_v.weight": "diffusion_pytorch_model-00001-of-00003.safetensors",
265
+ "single_transformer_blocks.3.ff.linear_1.weight": "diffusion_pytorch_model-00001-of-00003.safetensors",
266
+ "single_transformer_blocks.3.ff.linear_2.weight": "diffusion_pytorch_model-00001-of-00003.safetensors",
267
+ "single_transformer_blocks.3.ff.out_projection.weight": "diffusion_pytorch_model-00001-of-00003.safetensors",
268
+ "single_transformer_blocks.3.norm1.linear.weight": "diffusion_pytorch_model-00001-of-00003.safetensors",
269
+ "single_transformer_blocks.30.attn.to_k.weight": "diffusion_pytorch_model-00003-of-00003.safetensors",
270
+ "single_transformer_blocks.30.attn.to_out.0.weight": "diffusion_pytorch_model-00003-of-00003.safetensors",
271
+ "single_transformer_blocks.30.attn.to_q.weight": "diffusion_pytorch_model-00003-of-00003.safetensors",
272
+ "single_transformer_blocks.30.attn.to_v.weight": "diffusion_pytorch_model-00003-of-00003.safetensors",
273
+ "single_transformer_blocks.30.ff.linear_1.weight": "diffusion_pytorch_model-00003-of-00003.safetensors",
274
+ "single_transformer_blocks.30.ff.linear_2.weight": "diffusion_pytorch_model-00003-of-00003.safetensors",
275
+ "single_transformer_blocks.30.ff.out_projection.weight": "diffusion_pytorch_model-00003-of-00003.safetensors",
276
+ "single_transformer_blocks.30.norm1.linear.weight": "diffusion_pytorch_model-00003-of-00003.safetensors",
277
+ "single_transformer_blocks.31.attn.to_k.weight": "diffusion_pytorch_model-00003-of-00003.safetensors",
278
+ "single_transformer_blocks.31.attn.to_out.0.weight": "diffusion_pytorch_model-00003-of-00003.safetensors",
279
+ "single_transformer_blocks.31.attn.to_q.weight": "diffusion_pytorch_model-00003-of-00003.safetensors",
280
+ "single_transformer_blocks.31.attn.to_v.weight": "diffusion_pytorch_model-00003-of-00003.safetensors",
281
+ "single_transformer_blocks.31.ff.linear_1.weight": "diffusion_pytorch_model-00003-of-00003.safetensors",
282
+ "single_transformer_blocks.31.ff.linear_2.weight": "diffusion_pytorch_model-00003-of-00003.safetensors",
283
+ "single_transformer_blocks.31.ff.out_projection.weight": "diffusion_pytorch_model-00003-of-00003.safetensors",
284
+ "single_transformer_blocks.31.norm1.linear.weight": "diffusion_pytorch_model-00003-of-00003.safetensors",
285
+ "single_transformer_blocks.4.attn.to_k.weight": "diffusion_pytorch_model-00001-of-00003.safetensors",
286
+ "single_transformer_blocks.4.attn.to_out.0.weight": "diffusion_pytorch_model-00001-of-00003.safetensors",
287
+ "single_transformer_blocks.4.attn.to_q.weight": "diffusion_pytorch_model-00001-of-00003.safetensors",
288
+ "single_transformer_blocks.4.attn.to_v.weight": "diffusion_pytorch_model-00001-of-00003.safetensors",
289
+ "single_transformer_blocks.4.ff.linear_1.weight": "diffusion_pytorch_model-00001-of-00003.safetensors",
290
+ "single_transformer_blocks.4.ff.linear_2.weight": "diffusion_pytorch_model-00001-of-00003.safetensors",
291
+ "single_transformer_blocks.4.ff.out_projection.weight": "diffusion_pytorch_model-00001-of-00003.safetensors",
292
+ "single_transformer_blocks.4.norm1.linear.weight": "diffusion_pytorch_model-00001-of-00003.safetensors",
293
+ "single_transformer_blocks.5.attn.to_k.weight": "diffusion_pytorch_model-00001-of-00003.safetensors",
294
+ "single_transformer_blocks.5.attn.to_out.0.weight": "diffusion_pytorch_model-00001-of-00003.safetensors",
295
+ "single_transformer_blocks.5.attn.to_q.weight": "diffusion_pytorch_model-00001-of-00003.safetensors",
296
+ "single_transformer_blocks.5.attn.to_v.weight": "diffusion_pytorch_model-00001-of-00003.safetensors",
297
+ "single_transformer_blocks.5.ff.linear_1.weight": "diffusion_pytorch_model-00001-of-00003.safetensors",
298
+ "single_transformer_blocks.5.ff.linear_2.weight": "diffusion_pytorch_model-00001-of-00003.safetensors",
299
+ "single_transformer_blocks.5.ff.out_projection.weight": "diffusion_pytorch_model-00001-of-00003.safetensors",
300
+ "single_transformer_blocks.5.norm1.linear.weight": "diffusion_pytorch_model-00001-of-00003.safetensors",
301
+ "single_transformer_blocks.6.attn.to_k.weight": "diffusion_pytorch_model-00001-of-00003.safetensors",
302
+ "single_transformer_blocks.6.attn.to_out.0.weight": "diffusion_pytorch_model-00002-of-00003.safetensors",
303
+ "single_transformer_blocks.6.attn.to_q.weight": "diffusion_pytorch_model-00001-of-00003.safetensors",
304
+ "single_transformer_blocks.6.attn.to_v.weight": "diffusion_pytorch_model-00002-of-00003.safetensors",
305
+ "single_transformer_blocks.6.ff.linear_1.weight": "diffusion_pytorch_model-00002-of-00003.safetensors",
306
+ "single_transformer_blocks.6.ff.linear_2.weight": "diffusion_pytorch_model-00002-of-00003.safetensors",
307
+ "single_transformer_blocks.6.ff.out_projection.weight": "diffusion_pytorch_model-00002-of-00003.safetensors",
308
+ "single_transformer_blocks.6.norm1.linear.weight": "diffusion_pytorch_model-00001-of-00003.safetensors",
309
+ "single_transformer_blocks.7.attn.to_k.weight": "diffusion_pytorch_model-00002-of-00003.safetensors",
310
+ "single_transformer_blocks.7.attn.to_out.0.weight": "diffusion_pytorch_model-00002-of-00003.safetensors",
311
+ "single_transformer_blocks.7.attn.to_q.weight": "diffusion_pytorch_model-00002-of-00003.safetensors",
312
+ "single_transformer_blocks.7.attn.to_v.weight": "diffusion_pytorch_model-00002-of-00003.safetensors",
313
+ "single_transformer_blocks.7.ff.linear_1.weight": "diffusion_pytorch_model-00002-of-00003.safetensors",
314
+ "single_transformer_blocks.7.ff.linear_2.weight": "diffusion_pytorch_model-00002-of-00003.safetensors",
315
+ "single_transformer_blocks.7.ff.out_projection.weight": "diffusion_pytorch_model-00002-of-00003.safetensors",
316
+ "single_transformer_blocks.7.norm1.linear.weight": "diffusion_pytorch_model-00002-of-00003.safetensors",
317
+ "single_transformer_blocks.8.attn.to_k.weight": "diffusion_pytorch_model-00002-of-00003.safetensors",
318
+ "single_transformer_blocks.8.attn.to_out.0.weight": "diffusion_pytorch_model-00002-of-00003.safetensors",
319
+ "single_transformer_blocks.8.attn.to_q.weight": "diffusion_pytorch_model-00002-of-00003.safetensors",
320
+ "single_transformer_blocks.8.attn.to_v.weight": "diffusion_pytorch_model-00002-of-00003.safetensors",
321
+ "single_transformer_blocks.8.ff.linear_1.weight": "diffusion_pytorch_model-00002-of-00003.safetensors",
322
+ "single_transformer_blocks.8.ff.linear_2.weight": "diffusion_pytorch_model-00002-of-00003.safetensors",
323
+ "single_transformer_blocks.8.ff.out_projection.weight": "diffusion_pytorch_model-00002-of-00003.safetensors",
324
+ "single_transformer_blocks.8.norm1.linear.weight": "diffusion_pytorch_model-00002-of-00003.safetensors",
325
+ "single_transformer_blocks.9.attn.to_k.weight": "diffusion_pytorch_model-00002-of-00003.safetensors",
326
+ "single_transformer_blocks.9.attn.to_out.0.weight": "diffusion_pytorch_model-00002-of-00003.safetensors",
327
+ "single_transformer_blocks.9.attn.to_q.weight": "diffusion_pytorch_model-00002-of-00003.safetensors",
328
+ "single_transformer_blocks.9.attn.to_v.weight": "diffusion_pytorch_model-00002-of-00003.safetensors",
329
+ "single_transformer_blocks.9.ff.linear_1.weight": "diffusion_pytorch_model-00002-of-00003.safetensors",
330
+ "single_transformer_blocks.9.ff.linear_2.weight": "diffusion_pytorch_model-00002-of-00003.safetensors",
331
+ "single_transformer_blocks.9.ff.out_projection.weight": "diffusion_pytorch_model-00002-of-00003.safetensors",
332
+ "single_transformer_blocks.9.norm1.linear.weight": "diffusion_pytorch_model-00002-of-00003.safetensors",
333
+ "time_step_proj.linear_1.bias": "diffusion_pytorch_model-00001-of-00003.safetensors",
334
+ "time_step_proj.linear_1.weight": "diffusion_pytorch_model-00001-of-00003.safetensors",
335
+ "time_step_proj.linear_2.bias": "diffusion_pytorch_model-00001-of-00003.safetensors",
336
+ "time_step_proj.linear_2.weight": "diffusion_pytorch_model-00001-of-00003.safetensors"
337
+ }
338
+ }
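The index file is a plain JSON map from parameter name to the shard that stores it, plus the total byte size; loaders consult it to pull only the shards they need. A small inspection sketch using nothing but the standard library (run from the repository root):

```python
import json
from collections import Counter

with open("transformer/diffusion_pytorch_model.safetensors.index.json") as f:
    index = json.load(f)

print(index["metadata"]["total_size"], "bytes across all shards")

# Count how many tensors live in each shard file.
per_shard = Counter(index["weight_map"].values())
for shard, n_tensors in sorted(per_shard.items()):
    print(f"{shard}: {n_tensors} tensors")
```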
vae/config.json ADDED
@@ -0,0 +1,37 @@
1
+ {
2
+ "_class_name": "AutoencoderKL",
3
+ "_diffusers_version": "0.30.0.dev0",
4
+ "_name_or_path": "/raid/.cache/huggingface/models--fal--AuraFlow/snapshots/edf69bec4c8c57f5278a655aaca3ceb60d82c0b4/vae",
5
+ "act_fn": "silu",
6
+ "block_out_channels": [
7
+ 128,
8
+ 256,
9
+ 512,
10
+ 512
11
+ ],
12
+ "down_block_types": [
13
+ "DownEncoderBlock2D",
14
+ "DownEncoderBlock2D",
15
+ "DownEncoderBlock2D",
16
+ "DownEncoderBlock2D"
17
+ ],
18
+ "force_upcast": true,
19
+ "in_channels": 3,
20
+ "latent_channels": 4,
21
+ "latents_mean": null,
22
+ "latents_std": null,
23
+ "layers_per_block": 2,
24
+ "norm_num_groups": 32,
25
+ "out_channels": 3,
26
+ "sample_size": 1024,
27
+ "scaling_factor": 0.13025,
28
+ "shift_factor": null,
29
+ "up_block_types": [
30
+ "UpDecoderBlock2D",
31
+ "UpDecoderBlock2D",
32
+ "UpDecoderBlock2D",
33
+ "UpDecoderBlock2D"
34
+ ],
35
+ "use_post_quant_conv": true,
36
+ "use_quant_conv": true
37
+ }
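The VAE is the 4-channel `AutoencoderKL` shipped with AuraFlow, with a `scaling_factor` of 0.13025, so latents have to be divided by that factor before decoding. A minimal sketch, with the path and the latent tensor as placeholders:

```python
import torch
from diffusers import AutoencoderKL

# Placeholder path: a local checkout of this repository.
vae = AutoencoderKL.from_pretrained("./pony-v7", subfolder="vae")

# Stand-in latents; normally these come from the transformer's sampling loop.
latents = torch.randn(1, 4, 128, 128)  # 128 latent px * 8 = 1024 px output
with torch.no_grad():
    image = vae.decode(latents / vae.config.scaling_factor).sample
print(image.shape)  # torch.Size([1, 3, 1024, 1024])
```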
vae/diffusion_pytorch_model.fp16.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:bcb60880a46b63dea58e9bc591abe15f8350bde47b405f9c38f4be70c6161e68
3
+ size 167335342
vae/diffusion_pytorch_model.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:1598f3d24932bcfe6634e8b618ea1e30ab1d57f5aad13a6d2de446d2199f2341
3
+ size 334643268
workflows/README.md ADDED
@@ -0,0 +1,28 @@
1
+ # Pony V7 ComfyUI Workflows
2
+
3
+ ## How to Use
4
+
5
+ Simply drag and drop any of the workflow images below directly into your ComfyUI canvas to load them. ComfyUI will automatically parse the embedded workflow data from the image.
6
+
7
+ ## Available Workflows
8
+
9
+ ### [Basic Workflow](pony-v7-simple.png)
10
+ ![Basic Workflow](pony-v7-simple.png)
11
+
12
+ A simple, straightforward workflow for generating images with Pony V7 using the single-file safetensors checkpoint. Perfect for getting started quickly.
13
+
14
+ ### [GGUF Workflow](pony-v7-simple-gguf.png)
15
+ ![GGUF Workflow](pony-v7-simple-gguf.png)
16
+
17
+ Workflow optimized for the GGUF quantized models. Requires the [GGUF nodes by City96](https://github.com/city96/ComfyUI-GGUF). The GGUF format reduces VRAM usage with minimal quality degradation and no noticeable impact on performance; the Q8_0 version is recommended. See the [GGUF README](../gguf/) for more details.
18
+
19
+ ### [LoRA Workflow](pony-v7-lora.png)
20
+ ![LoRA Workflow](pony-v7-lora.png)
21
+
22
+ Workflow demonstrating how to use LoRA models with Pony V7. It shows the proper setup for loading and applying LoRA weights to enhance or modify generation results. See the [LoRA README](../lora/) for training and conversion information.
23
+
24
+ ### [Noise Selection Workflow](pony-v7-noise-selection.png)
25
+ ![Noise Selection Workflow](pony-v7-noise-selection.png)
26
+
27
+ Advanced workflow featuring the custom PonyNoise node, which lets you switch between GPU and CPU noise generation. Use it to match `diffusers` output or to ensure cross-platform consistency. Requires the [PonyNoise node](../comfy_nodes/).
28
+
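For context on the noise-selection workflow above: in `diffusers`, the initial latents are drawn on whatever device the `torch.Generator` you pass lives on, so a CPU generator produces the same starting noise on any machine; the GPU/CPU switch in the PonyNoise node mirrors that choice in ComfyUI. A hedged end-to-end sketch (placeholder path, illustrative sampler settings):

```python
import torch
from diffusers import AuraFlowPipeline

# Placeholder path: a local checkout of this repository.
pipe = AuraFlowPipeline.from_pretrained("./pony-v7", torch_dtype=torch.bfloat16).to("cuda")

# A CPU generator keeps the initial noise reproducible across GPUs and machines.
generator = torch.Generator(device="cpu").manual_seed(42)

image = pipe(
    "an anthro fox reading a book in a cozy library",
    num_inference_steps=30,   # illustrative values, not official recommendations
    guidance_scale=5.0,
    generator=generator,
).images[0]
image.save("pony-v7-sample.png")
```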
workflows/pony-v7-lora.png ADDED

Git LFS Details

  • SHA256: 63b321bb13d08a227975a7cc28c4f8be18be6088edbf2fad03849b5e1ea1c77e
  • Pointer size: 132 Bytes
  • Size of remote file: 2.4 MB
workflows/pony-v7-noise-selection.png ADDED

Git LFS Details

  • SHA256: 692163fa170d6f860862268556e9113a4a48031d92a4bc863098d9420351dd67
  • Pointer size: 132 Bytes
  • Size of remote file: 2.4 MB
workflows/pony-v7-simple-gguf.png ADDED

Git LFS Details

  • SHA256: bfad8a10e945e2a68de228efdbdcadc43cb9c3170aa686d8b23e8cdc6e7658ab
  • Pointer size: 132 Bytes
  • Size of remote file: 2.39 MB
workflows/pony-v7-simple.png ADDED

Git LFS Details

  • SHA256: 06ab3be36b4ab1a333d07329db1b31db3a37a1a87b5677ba172e8971c0ab9014
  • Pointer size: 132 Bytes
  • Size of remote file: 2.39 MB