nielsr (HF Staff) committed
Commit a294ed9 · verified · 1 Parent(s): b5df326

Add pipeline tag and update Block-Sparse Attention note


This PR improves the model card for FlashVSR by:

- Adding `pipeline_tag: image-to-image` to the metadata (see the front-matter sketch below), which improves the model's discoverability on the Hugging Face Hub under the relevant task category (Video Super-Resolution).
- Updating the "3️⃣ Install Block-Sparse Attention (Required)" section with more detailed information, including an important `⚠️ Note` from the GitHub repository regarding GPU compatibility. This provides clearer guidance for users on optimal hardware usage.

The existing structure, the links to the paper (on arXiv) and the project page, and the usage instructions are all preserved.
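
For reference, after this change the README's YAML front matter should look roughly as follows (a minimal sketch showing only the fields touched in the diff below):

```yaml
---
license: apache-2.0
pipeline_tag: image-to-image
---
```

The Hub reads `pipeline_tag` from this block to list the model under the corresponding task filter.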

Files changed (1)
  1. README.md +5 -2
README.md CHANGED

````diff
@@ -1,6 +1,8 @@
 ---
 license: apache-2.0
+pipeline_tag: image-to-image
 ---
+
 # ⚡ FlashVSR
 
 **Towards Real-Time Diffusion-Based Streaming Video Super-Resolution**
@@ -68,7 +70,7 @@ pip install -r requirements.txt
 
 #### 3️⃣ Install Block-Sparse Attention (Required)
 
-FlashVSR **requires** the **Block-Sparse Attention** backend for inference:
+FlashVSR relies on the **Block-Sparse Attention** backend to enable flexible and dynamic attention masking for efficient inference.
 
 ```bash
 git clone https://github.com/mit-han-lab/Block-Sparse-Attention
@@ -77,6 +79,7 @@ pip install packaging
 pip install ninja
 python setup.py install
 ```
+**⚠️ Note:** The Block-Sparse Attention backend currently achieves ideal acceleration only on NVIDIA A100 or A800 GPUs (Ampere architecture). On H100/H800 (Hopper) GPUs, due to differences in hardware scheduling and sparse kernel behavior, the expected speedup may not be realized, and in some cases performance can even be slower than dense attention.
 
 #### 4️⃣ Download Model Weights from Hugging Face
 
@@ -167,4 +170,4 @@ We gratefully acknowledge the following open-source projects:
 primaryClass={cs.CV},
 url={https://arxiv.org/abs/2510.12747},
 }
-```
+```
````
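
Regarding the `⚠️ Note` added above: a quick way to check which case applies to your machine is to query the GPU's compute capability. The snippet below is a hypothetical helper, not part of FlashVSR or Block-Sparse-Attention; it only uses standard PyTorch calls, and the mapping it assumes is Ampere A100/A800 = sm_80 and Hopper H100/H800 = sm_90.

```python
# Hypothetical helper (not part of the FlashVSR or Block-Sparse-Attention repos):
# report the local GPU's compute capability so you can tell whether the
# acceleration caveat in the note above applies to your hardware.
import torch

if torch.cuda.is_available():
    major, minor = torch.cuda.get_device_capability()  # e.g. (8, 0) on A100/A800, (9, 0) on H100/H800
    name = torch.cuda.get_device_name()
    if (major, minor) == (8, 0):
        # A100/A800-class Ampere: the sparse backend is reported to reach its intended speedup.
        print(f"{name} (sm_{major}{minor}): Ampere data-center GPU, Block-Sparse Attention speedup expected.")
    elif major == 9:
        # H100/H800 (Hopper): per the note, the speedup may not materialize and
        # sparse attention can even run slower than dense attention.
        print(f"{name} (sm_{major}{minor}): Hopper GPU, limited or no speedup expected from the sparse backend.")
    else:
        # Other architectures are not covered by the note above.
        print(f"{name} (sm_{major}{minor}): not covered by the compatibility note.")
else:
    print("No CUDA device detected.")
```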