nvvaulin committed
Commit 0b40748 · verified · 1 Parent(s): fabecaf

Upload folder using huggingface_hub
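The commit message indicates the files were pushed with the huggingface_hub client. A minimal sketch of that kind of upload is shown below; the repo id and local folder path are placeholders, not values recorded in this commit:

```python
# Minimal sketch of a folder upload with huggingface_hub; repo_id and
# folder_path are placeholders, not values recorded in this commit.
from huggingface_hub import upload_folder

upload_folder(
    repo_id="kandinskylab/Kandinsky-5.0-T2I-Lite-sft-Diffusers",  # placeholder target repo
    folder_path="./Kandinsky-5.0-T2I-Lite-sft-Diffusers",         # local folder with README, assets, weights
    commit_message="Upload folder using huggingface_hub",
)
```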

.gitattributes CHANGED
@@ -42,3 +42,14 @@ assets/sbs/kandinsky_5_video_lite_vs_wan_2.1_14B.jpg filter=lfs diff=lfs merge=lfs -text
  assets/sbs/kandinsky_5_video_lite_vs_wan_2.2_5B.jpg filter=lfs diff=lfs merge=lfs -text
  assets/sbs/kandinsky_5_video_lite_vs_wan_2.2_A14B.jpg filter=lfs diff=lfs merge=lfs -text
  assets/vbench.png filter=lfs diff=lfs merge=lfs -text
+ assets/generation_examples/images/1.jpg filter=lfs diff=lfs merge=lfs -text
+ assets/generation_examples/images/2.jpg filter=lfs diff=lfs merge=lfs -text
+ assets/generation_examples/images/3.jpg filter=lfs diff=lfs merge=lfs -text
+ assets/generation_examples/images/4.jpg filter=lfs diff=lfs merge=lfs -text
+ assets/generation_examples/images/5.jpg filter=lfs diff=lfs merge=lfs -text
+ assets/generation_examples/images/6.jpg filter=lfs diff=lfs merge=lfs -text
+ assets/generation_examples/images/7.jpg filter=lfs diff=lfs merge=lfs -text
+ assets/generation_examples/images/8.jpg filter=lfs diff=lfs merge=lfs -text
+ assets/generation_examples/images/9.jpg filter=lfs diff=lfs merge=lfs -text
+ assets/sbs_edit.png filter=lfs diff=lfs merge=lfs -text
+ assets/sbs_image.png filter=lfs diff=lfs merge=lfs -text
README.md CHANGED
@@ -1,6 +1,7 @@
  ---
  license: apache-2.0
  ---
  <div align="center">
  <picture>
  <img src="assets/KANDINSKY_LOGO_1_BLACK.png">
@@ -8,177 +9,133 @@ license: apache-2.0
  </div>

  <div align="center">
- <a href="https://habr.com/ru/companies/sberbank/articles/951800/">Habr</a> | <a href="https://ai-forever.github.io/Kandinsky-5/">Project Page</a> | Technical Report (soon) | <a href="https://github.com/ai-forever/Kandinsky-5">Original Github</a> | <a href="https://huggingface.co/collections/ai-forever/kandinsky-50-t2v-lite-diffusers-68dd73ebac816748ed79d6cb"> 🤗 Diffusers</a>
  </div>

  -----

- <h1>Kandinsky 5.0 T2V Lite - Diffusers</h1>

- This repository provides the 🤗 Diffusers integration for Kandinsky 5.0 T2V Lite - a lightweight video generation model (2B parameters) that ranks #1 among open-source models in its class.

- ## Project Updates

- - 🔥 **2025/09/29**: We have open-sourced `Kandinsky 5.0 T2V Lite` a lite (2B parameters) version of `Kandinsky 5.0 Video` text-to-video generation model.
- - 🚀 **Diffusers Integration**: Now available with easy-to-use 🤗 Diffusers pipeline!

- ## Kandinsky 5.0 T2V Lite

- Kandinsky 5.0 T2V Lite is a lightweight video generation model (2B parameters) that ranks #1 among open-source models in its class. It outperforms larger Wan models (5B and 14B) and offers the best understanding of Russian concepts in the open-source ecosystem.

- We provide 8 model variants, each optimized for different use cases:

- * **SFT model** — delivers the highest generation quality
- * **CFG-distilled** — runs 2× faster
- * **Diffusion-distilled** — enables low-latency generation with minimal quality loss (6× faster)
- * **Pretrain model** — designed for fine-tuning by researchers and enthusiasts

- ## Basic Usage
  ```python
  import torch
- from diffusers import Kandinsky5T2VPipeline
- from diffusers.utils import export_to_video

  # Load the pipeline
- pipe = Kandinsky5T2VPipeline.from_pretrained(
- "ai-forever/Kandinsky-5.0-T2V-Lite-sft-5s-Diffusers",
- torch_dtype=torch.bfloat16
- )
- pipe = pipe.to("cuda")

- # Generate video
- prompt = "A cat and a dog baking a cake together in a kitchen."
- negative_prompt = "Static, 2D cartoon, cartoon, 2d animation, paintings, images, worst quality, low quality, ugly, deformed, walking backwards"

  output = pipe(
  prompt=prompt,
- negative_prompt=negative_prompt,
- height=512,
- width=768,
- num_frames=121,
  num_inference_steps=50,
- guidance_scale=5.0,
- ).frames[0]
-
- ## Save the video
- export_to_video(output, "output.mp4", fps=24, quality=9)
  ```

- ## Using Different Model Variants
- ```python
- import torch
- from diffusers import Kandinsky5T2VPipeline
-
- # 5s SFT model (highest quality)
- pipe_sft = Kandinsky5T2VPipeline.from_pretrained(
- "ai-forever/Kandinsky-5.0-T2V-Lite-sft-5s-Diffusers",
- torch_dtype=torch.bfloat16
- )
-
- # 5s Distilled 16-step model (fastest)
- pipe_distill = Kandinsky5T2VPipeline.from_pretrained(
- "ai-forever/Kandinsky-5.0-T2V-Lite-distilled16steps-5s-Diffusers",
- torch_dtype=torch.bfloat16
- )
-
- # 5s No-CFG model (balanced speed/quality)
- pipe_nocfg = Kandinsky5T2VPipeline.from_pretrained(
- "ai-forever/Kandinsky-5.0-T2V-Lite-nocfg-5s-Diffusers",
- torch_dtype=torch.bfloat16
- )
-
- # 5s Pretrain model (most diverse)
- pipe_pretrain = Kandinsky5T2VPipeline.from_pretrained(
- "ai-forever/Kandinsky-5.0-T2V-Lite-pretrain-5s-Diffusers",
- torch_dtype=torch.bfloat16
- )
-
- # 10s SFT model (highest quality)
- pipe_sft = Kandinsky5T2VPipeline.from_pretrained(
- "ai-forever/Kandinsky-5.0-T2V-Lite-sft-10s-Diffusers",
- torch_dtype=torch.bfloat16
- )
-
- # 10s Distilled 16-step model (fastest)
- pipe_distill = Kandinsky5T2VPipeline.from_pretrained(
- "ai-forever/Kandinsky-5.0-T2V-Lite-distilled16steps-10s-Diffusers",
- torch_dtype=torch.bfloat16
- )
-
- # 10s No-CFG model (balanced speed/quality)
- pipe_nocfg = Kandinsky5T2VPipeline.from_pretrained(
- "ai-forever/Kandinsky-5.0-T2V-Lite-nocfg-10s-Diffusers",
- torch_dtype=torch.bfloat16
- )
-
- # 10s Pretrain model (most diverse)
- pipe_pretrain = Kandinsky5T2VPipeline.from_pretrained(
- "ai-forever/Kandinsky-5.0-T2V-Lite-pretrain-10s-Diffusers",
- torch_dtype=torch.bfloat16
- )
- ```
-
- ## Architecture
- Latent diffusion pipeline with Flow Matching.
-
- Diffusion Transformer (DiT) as the main generative backbone with cross-attention to text embeddings.
-
- Qwen2.5-VL and CLIP provides text embeddings
-
- HunyuanVideo 3D VAE encodes/decodes video into a latent space
-
- DiT is the main generative module using cross-attention to condition on text
-
- <div align="center">
- <img width="1600" height="477" alt="Pipeline Architecture" src="https://github.com/user-attachments/assets/17fc2eb5-05e3-4591-9ec6-0f6e1ca397b3" />
- </div>
-
- <div align="center">
- <img width="800" height="406" alt="Model Architecture" src="https://github.com/user-attachments/assets/f3006742-e261-4c39-b7dc-e39330be9a09" />
- </div>
-
- ## Examples
-
- Kandinsky 5.0 T2V Lite SFT
- <table border="0" style="width: 200; text-align: left; margin-top: 20px;"> <tr> <td> <video src="https://github.com/user-attachments/assets/bc38821b-f9f1-46db-885f-1f70464669eb" width=200 controls autoplay loop></video> </td> <td> <video src="https://github.com/user-attachments/assets/9f64c940-4df8-4c51-bd81-a05de8e70fc3" width=200 controls autoplay loop></video> </td> <tr> <td> <video src="https://github.com/user-attachments/assets/77dd417f-e0bf-42bd-8d80-daffcd054add" width=200 controls autoplay loop></video> </td> <td> <video src="https://github.com/user-attachments/assets/385a0076-f01c-4663-aa46-6ce50352b9ed" width=200 controls autoplay loop></video> </td> <tr> <td> <video src="https://github.com/user-attachments/assets/7c1bcb31-cc7d-4385-9a33-2b0cc28393dd" width=200 controls autoplay loop></video> </td> <td> <video src="https://github.com/user-attachments/assets/990a8a0b-2df1-4bbc-b2e3-2859b6f1eea6" width=200 controls autoplay loop></video> </td> </tr> </table>
- Kandinsky 5.0 T2V Lite Distill
- <table border="0" style="width: 200; text-align: left; margin-top: 20px;"> <tr> <td> <video src="https://github.com/user-attachments/assets/861342f9-f576-4083-8a3b-94570a970d58" width=200 controls autoplay loop></video> </td> <td> <video src="https://github.com/user-attachments/assets/302e4e7d-781d-4a58-9b10-8c473d469c4b" width=200 controls autoplay loop></video> </td> <tr> <td> <video src="https://github.com/user-attachments/assets/3e70175c-40e5-4aec-b506-38006fe91a76" width=200 controls autoplay loop></video> </td> <td> <video src="https://github.com/user-attachments/assets/b7da85f7-8b62-4d46-9460-7f0e505de810" width=200 controls autoplay loop></video> </td> </table>
- Results
- Side-by-Side Evaluation
- The evaluation is based on the expanded prompts from the Movie Gen benchmark.
-
- <table border="0" style="width: 400; text-align: left; margin-top: 20px;"> <tr> <td> <img src="assets/sbs/kandinsky_5_video_lite_vs_sora.jpg" width=400 ></img> </td> <td> <img src="assets/sbs/kandinsky_5_video_lite_vs_wan_2.1_14B.jpg" width=400 ></img> </td> <tr> <td> <img src="assets/sbs/kandinsky_5_video_lite_vs_wan_2.2_5B.jpg" width=400 ></img> </td> <td> <img src="assets/sbs/kandinsky_5_video_lite_vs_wan_2.2_A14B.jpg" width=400 ></img> </td> <tr> <td> <img src="assets/sbs/kandinsky_5_video_lite_vs_wan_2.1_1.3B.jpg" width=400 ></img> </td> </table>
- Distill Side-by-Side Evaluation
- <table border="0" style="width: 400; text-align: left; margin-top: 20px;"> <tr> <td> <img src="assets/sbs/kandinsky_5_video_lite_5s_vs_kandinsky_5_video_lite_distill_5s.jpg" width=400 ></img> </td> <td> <img src="assets/sbs/kandinsky_5_video_lite_10s_vs_kandinsky_5_video_lite_distill_10s.jpg" width=400 ></img> </td> </table>
- VBench Results
- <div align="center"> <picture> <img src="assets/vbench.png"> </picture> </div>
- Beta Testing
- You can apply to participate in the beta testing of the Kandinsky Video Lite via the telegram bot.
-
  ```bibtex
  @misc{kandinsky2025,
- author = {Alexey Letunovskiy, Maria Kovaleva, Ivan Kirillov, Lev Novitskiy, Denis Koposov,
- Dmitrii Mikhailov, Anna Averchenkova, Andrey Shutkin, Julia Agafonova, Olga Kim,
- Anastasiia Kargapoltseva, Nikita Kiselev, Vladimir Arkhipkin, Vladimir Korviakov,
- Nikolai Gerasimenko, Denis Parkhomenko, Anna Dmitrienko, Anastasia Maltseva,
- Kirill Chernyshev, Ilia Vasiliev, Viacheslav Vasilev, Vladimir Polovnikov,
- Yury Kolabushin, Alexander Belykh, Mikhail Mamaev, Anastasia Aliaskina,
- Tatiana Nikulina, Polina Gavrilova, Denis Dimitrov},
  title = {Kandinsky 5.0: A family of diffusion models for Video & Image generation},
- howpublished = {\url{https://github.com/ai-forever/Kandinsky-5}},
  year = 2025
  }
-
- @misc{mikhailov2025nablanablaneighborhoodadaptiveblocklevel,
- title={$\nabla$NABLA: Neighborhood Adaptive Block-Level Attention},
- author={Dmitrii Mikhailov and Aleksey Letunovskiy and Maria Kovaleva and Vladimir Arkhipkin
- and Vladimir Korviakov and Vladimir Polovnikov and Viacheslav Vasilev
- and Evelina Sidorova and Denis Dimitrov},
- year={2025},
- eprint={2507.13546},
- archivePrefix={arXiv},
- primaryClass={cs.CV},
- url={https://arxiv.org/abs/2507.13546},
- }
- ```
 
  ---
  license: apache-2.0
  ---
+
  <div align="center">
  <picture>
  <img src="assets/KANDINSKY_LOGO_1_BLACK.png">

  </div>

  <div align="center">
+ <a href="https://habr.com/ru/companies/sberbank/articles/951800/">Habr</a> |
+ <a href="https://ai-forever.github.io/Kandinsky-5/">Project Page</a> |
+ <a href="https://github.com/kandinskylab/kandinsky-5/blob/main/paper.pdf">Technical Report</a> |
+ <a href="https://github.com/ai-forever/Kandinsky-5">Original GitHub</a> |
+ <a href="https://huggingface.co/collections/kandinskylab/kandinsky-50-image-lite-diffusers">🤗 Diffusers</a>
  </div>

  -----

+ <h1>Kandinsky 5.0 T2I Lite SFT Diffusers</h1>
+
+ Kandinsky 5.0 is a family of diffusion models for video and image generation.

+ Kandinsky 5.0 Image Lite is a lightweight text-to-image (T2I) generation model with 6B parameters.

+ The model introduces several key innovations:
+ - **Latent diffusion pipeline** with **Flow Matching** for improved training stability
+ - **Diffusion Transformer (DiT)** as the main generative backbone with cross-attention to text embeddings
+ - Dual text encoding using **Qwen2.5-VL** and **CLIP** for comprehensive text understanding
+ - **Flux VAE** for efficient image encoding and decoding

+ The original codebase can be found at [kandinskylab/Kandinsky-5](https://github.com/kandinskylab/Kandinsky-5).
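Once a checkpoint is loaded, those components can be inspected directly. The sketch below is an illustration added here, not part of the original card; it assumes the standard Diffusers layout, and the exact component names inside Kandinsky5T2IPipeline may differ.

```python
import torch
from diffusers import Kandinsky5T2IPipeline

# Load the SFT text-to-image checkpoint (repo id taken from this model card).
pipe = Kandinsky5T2IPipeline.from_pretrained(
    "kandinskylab/Kandinsky-5.0-T2I-Lite-sft-Diffusers", torch_dtype=torch.bfloat16
)

# Diffusers pipelines expose their sub-models via `components`; for this model the
# dict should cover the DiT backbone, the Qwen2.5-VL and CLIP text encoders, and the Flux VAE.
for name, component in pipe.components.items():
    print(name, type(component).__name__)
```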

+ ## Available Models

+ Kandinsky 5.0 Image Lite:
+ | model_id | Description | Use Cases |
+ |------------|-------------|-----------|
+ | **<a href="https://huggingface.co/kandinskylab/Kandinsky-5.0-T2I-Lite-sft-Diffusers">kandinskylab/Kandinsky-5.0-T2I-Lite-sft-Diffusers</a>** | 6B supervised fine-tuned text-to-image model | Highest generation quality |
+ | **<a href="https://huggingface.co/kandinskylab/Kandinsky-5.0-I2I-Lite-sft-Diffusers">kandinskylab/Kandinsky-5.0-I2I-Lite-sft-Diffusers</a>** | 6B supervised fine-tuned image-to-image editing model | Highest generation quality |
+ | **<a href="https://huggingface.co/kandinskylab/Kandinsky-5.0-T2I-Lite-pretrain-Diffusers">kandinskylab/Kandinsky-5.0-T2I-Lite-pretrain-Diffusers</a>** | 6B base pretrained text-to-image model | Research and fine-tuning |
+ | **<a href="https://huggingface.co/kandinskylab/Kandinsky-5.0-I2I-Lite-pretrain-Diffusers">kandinskylab/Kandinsky-5.0-I2I-Lite-pretrain-Diffusers</a>** | 6B base pretrained image-to-image editing model | Research and fine-tuning |
 
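Any of the checkpoints listed above can be loaded the same way. A minimal sketch, assuming each repo follows the standard Diffusers layout so that `DiffusionPipeline` resolves the concrete pipeline class from the repo's `model_index.json`:

```python
import torch
from diffusers import DiffusionPipeline

# Repo ids are taken from the table above; torch_dtype keeps memory use moderate.
t2i_sft = DiffusionPipeline.from_pretrained(
    "kandinskylab/Kandinsky-5.0-T2I-Lite-sft-Diffusers", torch_dtype=torch.bfloat16
)
t2i_pretrain = DiffusionPipeline.from_pretrained(
    "kandinskylab/Kandinsky-5.0-T2I-Lite-pretrain-Diffusers", torch_dtype=torch.bfloat16
)
i2i_sft = DiffusionPipeline.from_pretrained(
    "kandinskylab/Kandinsky-5.0-I2I-Lite-sft-Diffusers", torch_dtype=torch.bfloat16
)
i2i_pretrain = DiffusionPipeline.from_pretrained(
    "kandinskylab/Kandinsky-5.0-I2I-Lite-pretrain-Diffusers", torch_dtype=torch.bfloat16
)
```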
+ ## Examples
+
+ <table border="0" style="width: 90%; text-align: left; margin-top: 20px;">
+ <tr>
+ <td>
+ <img src="assets/generation_examples/images/1.jpg" width="90%">
+ </td>
+ <td>
+ <img src="assets/generation_examples/images/2.jpg" width="90%">
+ </td>
+ <td>
+ <img src="assets/generation_examples/images/9.jpg" width="90%">
+ </td>
+ </tr>
+ </table>
+ <table border="0" style="width: 90%; text-align: left; margin-top: 10px;">
+ <tr>
+ <td>
+ <img src="assets/generation_examples/images/4.jpg" width="90%">
+ </td>
+ <td>
+ <img src="assets/generation_examples/images/5.jpg" width="90%">
+ </td>
+ <td>
+ <img src="assets/generation_examples/images/3.jpg" width="90%">
+ </td>
+ </tr>
+ </table>
+ <table border="0" style="width: 90%; text-align: left; margin-top: 10px;">
+ <tr>
+ <td>
+ <img src="assets/generation_examples/images/7.jpg" width="90%">
+ </td>
+ <td>
+ <img src="assets/generation_examples/images/8.jpg" width="90%">
+ </td>
+ <td>
+ <img src="assets/generation_examples/images/6.jpg" width="90%">
+ </td>
+ </tr>
+ </table>
+
+ ## Kandinsky5T2IPipeline Usage Example

  ```python
  import torch
+ from diffusers import Kandinsky5T2IPipeline

  # Load the pipeline
+ model_id = "kandinskylab/Kandinsky-5.0-T2I-Lite-sft-Diffusers"
+ pipe = Kandinsky5T2IPipeline.from_pretrained(model_id)
+ _ = pipe.to(device="cuda", dtype=torch.bfloat16)

+ # Generate image
+ prompt = "A fluffy, expressive cat wearing a bright red hat with a soft, slightly textured fabric. The hat should look cozy and well-fitted on the cat’s head. On the front of the hat, add clean, bold white text that reads “SWEET”, clearly visible and neatly centered. Ensure the overall lighting highlights the hat’s color and the cat’s fur details."

  output = pipe(
  prompt=prompt,
+ negative_prompt="",
+ height=1024,
+ width=1024,
  num_inference_steps=50,
+ guidance_scale=3.5,
+ ).images[0]
  ```
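Assuming the usual Diffusers convention that image pipelines return PIL images, the generated sample can be saved directly; the file name below is only an example:

```python
# `output` is the first generated image from the call above (a PIL image).
output.save("kandinsky_t2i_sample.png")
```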

+ ## Results
+
+ <table style="width:100%; text-align:center; margin-top:20px;">
+ <tr>
+ <td>
+ <img src="assets/sbs_image.png" width="100%">
+ </td>
+ <td>
+ <img src="assets/sbs_edit.png" width="100%">
+ </td>
+ </tr>
+ <tr>
+ <td style="font-size: 1.1em; font-weight: 500; padding-top: 6px;">
+ Side-by-side evaluation of T2I on PartiPrompts with extended prompts
+ </td>
+ <td style="font-size: 1.1em; font-weight: 500; padding-top: 6px;">
+ Side-by-side evaluation of I2I on the Flux Kontext benchmark with extended prompts
+ </td>
+ </tr>
+ </table>
+
+
+ ## Citation
  ```bibtex
  @misc{kandinsky2025,
+ author = {Alexander Belykh and Alexander Varlamov and Alexey Letunovskiy and Anastasia Aliaskina and Anastasia Maltseva and Anastasiia Kargapoltseva and Andrey Shutkin and Anna Averchenkova and Anna Dmitrienko and Bulat Akhmatov and Denis Dimitrov and Denis Koposov and Denis Parkhomenko and Dmitrii Mikhailov and Ilya Vasiliev and Ivan Kirillov and Julia Agafonova and Kirill Chernyshev and Kormilitsyn Semen and Lev Novitskiy and Maria Kovaleva and Mikhail Mamaev and Nikita Kiselev and Nikita Osterov and Nikolai Gerasimenko and Nikolai Vaulin and Olga Kim and Olga Vdovchenko and Polina Gavrilova and Polina Mikhailova and Tatiana Nikulina and Viacheslav Vasilev and Vladimir Arkhipkin and Vladimir Korviakov and Vladimir Polovnikov and Yury Kolabushin},
  title = {Kandinsky 5.0: A family of diffusion models for Video & Image generation},
+ howpublished = {\url{https://github.com/kandinskylab/Kandinsky-5}},
  year = 2025
  }
+ ```
assets/KANDINSKY_LOGO_1_BLACK.png ADDED
assets/generation_examples/images/1.jpg ADDED

Git LFS Details

  • SHA256: 4888ff821aab23687404a4f32b3ab3246631ea031b574775b609f95bc353cd48
  • Pointer size: 131 Bytes
  • Size of remote file: 298 kB
assets/generation_examples/images/2.jpg ADDED

Git LFS Details

  • SHA256: cfb86ae5dec2f49426d6c19fdd051b70859b1f0252ab88c21556b233cb7e0b43
  • Pointer size: 131 Bytes
  • Size of remote file: 215 kB
assets/generation_examples/images/3.jpg ADDED

Git LFS Details

  • SHA256: 97c3d1e801665a2d22352ea19e873293b87da24520646289e81d96fb3d076d21
  • Pointer size: 131 Bytes
  • Size of remote file: 113 kB
assets/generation_examples/images/4.jpg ADDED

Git LFS Details

  • SHA256: 64a0bce53cf94b53310afeb83100dd5d2567aa115bf4ae177e100cb58f88b779
  • Pointer size: 131 Bytes
  • Size of remote file: 165 kB
assets/generation_examples/images/5.jpg ADDED

Git LFS Details

  • SHA256: 9c7fec66fd00e0d4debd16007cad9b5d9486bc550895096be4d6f27432aa5d46
  • Pointer size: 131 Bytes
  • Size of remote file: 219 kB
assets/generation_examples/images/6.jpg ADDED

Git LFS Details

  • SHA256: 583ec4fd596327dec28cf81539dd7e156e36e509051dd4d2d32acd85c0a6e566
  • Pointer size: 131 Bytes
  • Size of remote file: 116 kB
assets/generation_examples/images/7.jpg ADDED

Git LFS Details

  • SHA256: c2a19bcbdea09028b91c33704f87a040011d92741acbf18aaa665d1d359b516b
  • Pointer size: 131 Bytes
  • Size of remote file: 111 kB
assets/generation_examples/images/8.jpg ADDED

Git LFS Details

  • SHA256: 2fa7260f114d615d464df062ff4e4c2b2a0311779f7377340d663f92a8105fbb
  • Pointer size: 131 Bytes
  • Size of remote file: 200 kB
assets/generation_examples/images/9.jpg ADDED

Git LFS Details

  • SHA256: 2b32f2e5c2049bf0e432a9e3bfedd5db3c74ed1b0dbfb277219a21682bb9a138
  • Pointer size: 131 Bytes
  • Size of remote file: 218 kB
assets/sbs_edit.png ADDED

Git LFS Details

  • SHA256: 80098a22c2d8a33786efd780541eb6bb0624c54d58e10420445d5049d786e3d7
  • Pointer size: 131 Bytes
  • Size of remote file: 221 kB
assets/sbs_image.png ADDED

Git LFS Details

  • SHA256: ca5d57a61d6181768a4598fef7972406c5788025c34e1cb6a334d6a8e6a3c3d1
  • Pointer size: 131 Bytes
  • Size of remote file: 225 kB