|
|
--- |
|
|
library_name: sana |
|
|
tags: |
|
|
- text-to-image |
|
|
- Sana |
|
|
- 1024px_based_image_size |
|
|
- Multi-language |
|
|
- ControlNet |
|
|
language: |
|
|
- en |
|
|
- zh |
|
|
base_model: |
|
|
- Efficient-Large-Model/Sana_1600M_1024px_BF16_ControlNet_HED |
|
|
pipeline_tag: text-to-image |
|
|
--- |
|
|
|
|
|
<p align="center" style="border-radius: 10px"> |
|
|
<img src="https://raw.githubusercontent.com/NVlabs/Sana/refs/heads/main/asset/logo.png" width="35%" alt="logo"/> |
|
|
</p> |
|
|
|
|
|
<div style="display:flex;justify-content: center"> |
|
|
<a href="https://huggingface.co/collections/Efficient-Large-Model/sana-673efba2a57ed99843f11f9e"><img src="https://img.shields.io/static/v1?label=Demo&message=Huggingface&color=yellow"></a>   |
|
|
<a href="https://github.com/NVlabs/Sana"><img src="https://img.shields.io/static/v1?label=Code&message=Github&color=blue&logo=github"></a>   |
|
|
<a href="https://nvlabs.github.io/Sana/"><img src="https://img.shields.io/static/v1?label=Project&message=Github&color=blue&logo=github-pages"></a>   |
|
|
<a href="https://hanlab.mit.edu/projects/sana/"><img src="https://img.shields.io/static/v1?label=Page&message=MIT&color=darkred&logo=github-pages"></a>   |
|
|
<a href="https://arxiv.org/abs/2410.10629"><img src="https://img.shields.io/static/v1?label=Arxiv&message=Sana&color=red&logo=arxiv"></a>   |
|
|
<a href="https://nv-sana.mit.edu/"><img src="https://img.shields.io/static/v1?label=Demo&message=MIT&color=yellow"></a>   |
|
|
<a href="https://discord.gg/rde6eaE5Ta"><img src="https://img.shields.io/static/v1?label=Discuss&message=Discord&color=purple&logo=discord"></a>   |
|
|
</div> |
|
|
|
|
|
# Model card |
|
|
|
|
|
We incorporate a [ControlNet](https://github.com/lllyasviel/ControlNet)-like module that enables fine-grained control over text-to-image diffusion models.
We implement ControlNet-Transformer, an architecture specifically tailored for Transformer backbones, achieving explicit controllability alongside high-quality image generation.
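For intuition only, here is a minimal sketch of the general ControlNet-Transformer pattern: trainable copies of the first few transformer blocks process the control signal, and each copy feeds back into the frozen base blocks through a zero-initialized linear layer. The class, argument names, and shapes below are illustrative assumptions, not the Sana source code.

```python
# Illustrative sketch of the ControlNet-Transformer idea (not the actual Sana code).
import copy

import torch
import torch.nn as nn


class ControlNetTransformerSketch(nn.Module):
    def __init__(self, base_blocks: nn.ModuleList, num_control_blocks: int, hidden_dim: int):
        super().__init__()
        self.base_blocks = base_blocks  # frozen, pretrained transformer blocks
        # Trainable copies of the first N blocks form the control branch.
        self.control_blocks = nn.ModuleList(
            copy.deepcopy(base_blocks[i]) for i in range(num_control_blocks)
        )
        # Zero-initialized projections: the control branch is a no-op at step 0,
        # so training starts exactly from the base model's behavior.
        self.zero_linears = nn.ModuleList(
            nn.Linear(hidden_dim, hidden_dim) for _ in range(num_control_blocks)
        )
        for linear in self.zero_linears:
            nn.init.zeros_(linear.weight)
            nn.init.zeros_(linear.bias)

    def forward(self, x: torch.Tensor, control: torch.Tensor) -> torch.Tensor:
        # x: image tokens (B, T, D); control: encoded condition (e.g., HED edges)
        c = x + control  # inject the condition before the first copied block
        for i, block in enumerate(self.base_blocks):
            if i < len(self.control_blocks):
                c = self.control_blocks[i](c)
                x = block(x) + self.zero_linears[i](c)
            else:
                x = block(x)
        return x
```

The zero-initialization is the key design choice: the control branch contributes nothing at the start of fine-tuning, so image quality begins at the base model's level and controllability is learned on top.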
|
|
|
|
|
|
|
|
Source code is available at https://github.com/NVlabs/Sana. |
|
|
|
|
|
|
|
|
|
|
|
<img src="https://nvlabs.github.io/Sana/asset/content/controlnet/sana_controlnet.jpg" width=640> |
|
|
|
|
|
## How to Use |
|
|
|
|
|
Refer to the [Sana-ControlNet Guidance](https://raw.githubusercontent.com/NVlabs/Sana/refs/heads/main/asset/controlnet/controlnet_app.jpg) for more details.
|
|
|
|
|
```python |
|
|
import torch |
|
|
from PIL import Image |
|
|
from app.sana_controlnet_pipeline import SanaControlNetPipeline |
|
|
|
|
|
device = "cuda" if torch.cuda.is_available() else "cpu" |
|
|
|
|
|
# Build the pipeline from its YAML config, then load the ControlNet checkpoint.
pipe = SanaControlNetPipeline("configs/sana_controlnet_config/Sana_1600M_1024px_controlnet_bf16.yaml")
pipe.from_pretrained("hf://Efficient-Large-Model/Sana_1600M_1024px_BF16_ControlNet_HED/checkpoints/Sana_1600M_1024px_BF16_ControlNet_HED.pth")
|
|
|
|
|
ref_image = Image.open("asset/controlnet/ref_images/A transparent sculpture of a duck made out of glass. The sculpture is in front of a painting of a la.jpg") |
|
|
prompt = "A transparent sculpture of a duck made out of glass. The sculpture is in front of a painting of a landscape." |
|
|
|
|
|
images = pipe(
    prompt=prompt,
    ref_image=ref_image,  # image providing the control condition
    guidance_scale=4.5,  # classifier-free guidance strength
    num_inference_steps=10,
    sketch_thickness=2,  # line width of the rendered edge sketch
    generator=torch.Generator(device=device).manual_seed(0),  # fixed seed for reproducibility
)
|
|
``` |
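This checkpoint is conditioned on HED edge maps. The sketch below uses the third-party `controlnet_aux` package to inspect the edge condition and assumes the pipeline returns a list of PIL images; both points are assumptions rather than documented behavior (the Sana app pipeline may derive the edge map from `ref_image` internally).

```python
# Optional: inspect the HED edge condition with the third-party controlnet_aux
# package (an assumption; the Sana app pipeline may derive it internally).
from controlnet_aux import HEDdetector

hed = HEDdetector.from_pretrained("lllyasviel/Annotators")
edge_map = hed(ref_image)  # soft HED edge map as a PIL image
edge_map.save("hed_condition.png")

# Assumption: `images` is a list of PIL images; save the first sample.
images[0].save("sana_controlnet_output.png")
```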
|
|
|
|
|
### Model Description |
|
|
|
|
|
- **Developed by:** NVIDIA, Sana |
|
|
- **Model type:** Linear-Diffusion-Transformer-based text-to-image generative model with a ControlNet module
|
|
- **Model size:** 2B parameters |
|
|
- **Model resolution:** This model is developed to generate 1024px-based images with multi-scale height and width.
|
|
- **License:** [NSCL v2-custom](./LICENSE.txt). Governing Terms: NVIDIA License. Additional Information: [Gemma Terms of Use | Google AI for Developers](https://ai.google.dev/gemma/terms) for Gemma-2-2B-IT, [Gemma Prohibited Use Policy | Google AI for Developers](https://ai.google.dev/gemma/prohibited_use_policy). |
|
|
- **Model Description:** This is a model that can be used to generate and modify images based on text prompts.
  It is a Linear Diffusion Transformer that uses one fixed, pretrained text encoder ([Gemma2-2B-IT](https://huggingface.co/google/gemma-2-2b-it))
  and one 32x spatial-compression latent feature encoder ([DC-AE](https://hanlab.mit.edu/projects/dc-ae)), so a 1024x1024 image is encoded into a 32x32 latent grid.
|
|
- **Special:** This model is fine-tuned from the base model [Efficient-Large-Model/Sana_1600M_1024px_BF16](https://huggingface.co/Efficient-Large-Model/Sana_1600M_1024px_BF16) and adds HED-conditioned ControlNet support.
|
|
- **Resources for more information:** Check out our [GitHub Repository](https://github.com/NVlabs/Sana) and the [Sana report on arXiv](https://arxiv.org/abs/2410.10629). |
|
|
|
|
|
### Model Sources |
|
|
|
|
|
For research purposes, we recommend our GitHub repository (https://github.com/NVlabs/Sana),
which is suitable for both training and inference and integrates advanced diffusion samplers such as Flow-DPM-Solver.
|
|
[MIT Han-Lab](https://nv-sana.mit.edu/) provides free Sana inference. |
|
|
- **Repository:** https://github.com/NVlabs/Sana
|
|
- **Demo:** https://nv-sana.mit.edu/ |
|
|
|
|
|
|
|
|
## Uses |
|
|
|
|
|
### Direct Use |
|
|
|
|
|
The model is intended for research purposes only. Possible research areas and tasks include:
|
|
|
|
|
- Generation of artworks and use in design and other artistic processes. |
|
|
- Applications in educational or creative tools. |
|
|
- Research on generative models. |
|
|
- Safe deployment of models which have the potential to generate harmful content.


- Probing and understanding the limitations and biases of generative models.
|
|
|
|
|
Excluded uses are described below. |
|
|
|
|
|
### Out-of-Scope Use |
|
|
|
|
|
The model was not trained to produce factual or true representations of people or events, so using the model to generate such content is out of scope for its abilities.
|
|
|
|
|
## Limitations and Bias |
|
|
|
|
|
### Limitations |
|
|
|
|
|
|
|
|
- The model does not achieve perfect photorealism.
- The model cannot render complex legible text.
- Fine details such as fingers may not be generated properly.
|
|
- The autoencoding part of the model is lossy. |
|
|
|
|
|
### Bias |
|
|
While the capabilities of image generation models are impressive, they can also reinforce or exacerbate social biases. |