# SkyReelsV2Transformer3DModel

A Diffusion Transformer model for 3D video-like data, introduced in [SkyReels-V2](https://github.com/SkyworkAI/SkyReels-V2) by Skywork AI.

The model can be loaded with the following code snippet.

```python
import torch

from diffusers import SkyReelsV2Transformer3DModel

transformer = SkyReelsV2Transformer3DModel.from_pretrained(
    "Skywork/SkyReels-V2-DF-1.3B-540P-Diffusers", subfolder="transformer", torch_dtype=torch.bfloat16
)
```
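
The transformer is usually consumed through a pipeline rather than called directly. A minimal sketch, assuming the diffusion-forcing text-to-video pipeline (`SkyReelsV2DiffusionForcingPipeline`) that pairs with this checkpoint; the prompt, frame count, and fps are illustrative:

```python
import torch

from diffusers import SkyReelsV2DiffusionForcingPipeline, SkyReelsV2Transformer3DModel
from diffusers.utils import export_to_video

transformer = SkyReelsV2Transformer3DModel.from_pretrained(
    "Skywork/SkyReels-V2-DF-1.3B-540P-Diffusers", subfolder="transformer", torch_dtype=torch.bfloat16
)
# Pass the explicitly loaded transformer into the pipeline; the remaining
# components (text encoder, VAE, scheduler) come from the same checkpoint.
pipeline = SkyReelsV2DiffusionForcingPipeline.from_pretrained(
    "Skywork/SkyReels-V2-DF-1.3B-540P-Diffusers", transformer=transformer, torch_dtype=torch.bfloat16
)
pipeline.to("cuda")

# Generate a short clip; the generation arguments here are illustrative.
video = pipeline(prompt="A cat walks on the grass, realistic", num_frames=97).frames[0]
export_to_video(video, "output.mp4", fps=24)
```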

## SkyReelsV2Transformer3DModel[[diffusers.SkyReelsV2Transformer3DModel]]

#### diffusers.SkyReelsV2Transformer3DModel[[diffusers.SkyReelsV2Transformer3DModel]]

[Source](https://github.com/huggingface/diffusers/blob/v0.37.1/src/diffusers/models/transformers/transformer_skyreels_v2.py#L518)

A Transformer model for video-like data used in the Wan-based SkyReels-V2 model.

**Parameters:**

patch_size (`tuple[int]`, defaults to `(1, 2, 2)`) : 3D patch dimensions for video embedding (t_patch, h_patch, w_patch); see the token-count sketch after this parameter list.

num_attention_heads (`int`, defaults to `16`) : The number of attention heads to use.

attention_head_dim (`int`, defaults to `128`) : The number of channels in each head.

in_channels (`int`, defaults to `16`) : The number of channels in the input.

out_channels (`int`, defaults to `16`) : The number of channels in the output.

text_dim (`int`, defaults to `4096`) : Input dimension for text embeddings.

freq_dim (`int`, defaults to `256`) : Dimension for sinusoidal time embeddings.

ffn_dim (`int`, defaults to `8192`) : Intermediate dimension in feed-forward network.

num_layers (`int`, defaults to `32`) : The number of layers of transformer blocks to use.

window_size (`tuple[int]`, defaults to `(-1, -1)`) : Window size for local attention (-1 indicates global attention).

cross_attn_norm (`bool`, defaults to `True`) : Enable cross-attention normalization.

qk_norm (`str`, *optional*, defaults to `"rms_norm_across_heads"`) : The type of query/key normalization to apply.

eps (`float`, defaults to `1e-6`) : Epsilon value for normalization layers.

inject_sample_info (`bool`, defaults to `False`) : Whether to inject sample information into the model.

image_dim (`int`, *optional*) : The dimension of the image embeddings.

added_kv_proj_dim (`int`, *optional*) : The dimension of the added key/value projection.

rope_max_seq_len (`int`, defaults to `1024`) : The maximum sequence length for the rotary embeddings.

pos_embed_seq_len (`int`, *optional*) : The sequence length for the positional embeddings.
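
To make the geometry of these defaults concrete, here is a back-of-the-envelope sketch (plain Python, no model required) of how `patch_size`, `num_attention_heads`, and `attention_head_dim` relate a latent video to transformer tokens; the latent dimensions below are illustrative assumptions, not values required by the model:

```python
# Defaults from the parameter list above.
t_patch, h_patch, w_patch = 1, 2, 2                    # patch_size
num_attention_heads, attention_head_dim = 16, 128
inner_dim = num_attention_heads * attention_head_dim   # 2048, the transformer width

# An assumed latent video shape (in_channels=16 channels, frames x height x width).
frames, height, width = 5, 30, 52

# Each t_patch x h_patch x w_patch latent patch becomes one transformer token.
num_tokens = (frames // t_patch) * (height // h_patch) * (width // w_patch)
print(num_tokens, inner_dim)  # 1950 tokens, each projected to a 2048-dim vector
```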

## Transformer2DModelOutput[[diffusers.models.modeling_outputs.Transformer2DModelOutput]]

#### diffusers.models.modeling_outputs.Transformer2DModelOutput[[diffusers.models.modeling_outputs.Transformer2DModelOutput]]

[Source](https://github.com/huggingface/diffusers/blob/v0.37.1/src/diffusers/models/modeling_outputs.py#L21)

The output of [Transformer2DModel](/docs/diffusers/v0.37.1/en/api/models/transformer2d#diffusers.Transformer2DModel).

**Parameters:**

sample (`torch.Tensor` of shape `(batch_size, num_channels, height, width)` or `(batch_size, num_vector_embeds - 1, num_latent_pixels)` if [Transformer2DModel](/docs/diffusers/v0.37.1/en/api/models/transformer2d#diffusers.Transformer2DModel) is discrete) : The hidden states output conditioned on the `encoder_hidden_states` input. If discrete, returns probability distributions for the unnoised latent pixels.
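
For `SkyReelsV2Transformer3DModel`, the `sample` field holds the predicted latent video with the same shape as the `hidden_states` input. A minimal sketch of a dummy forward pass, assuming the Wan-style call signature (`hidden_states`, `timestep`, `encoder_hidden_states`) and illustrative tensor shapes:

```python
import torch

from diffusers import SkyReelsV2Transformer3DModel

transformer = SkyReelsV2Transformer3DModel.from_pretrained(
    "Skywork/SkyReels-V2-DF-1.3B-540P-Diffusers", subfolder="transformer", torch_dtype=torch.bfloat16
).to("cuda")

# Illustrative shapes (assumed): one latent video (B, C, T, H, W) with the
# default in_channels=16, and T5-style text embeddings of width text_dim=4096.
hidden_states = torch.randn(1, 16, 5, 30, 52, dtype=torch.bfloat16, device="cuda")
encoder_hidden_states = torch.randn(1, 512, 4096, dtype=torch.bfloat16, device="cuda")
timestep = torch.tensor([999], device="cuda")

with torch.no_grad():
    output = transformer(
        hidden_states=hidden_states,
        timestep=timestep,
        encoder_hidden_states=encoder_hidden_states,
    )

print(output.sample.shape)  # torch.Size([1, 16, 5, 30, 52]), matches the input latent
```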

