Is there a way to generate longer video? (no loop)?

#198
by rexnguyen - opened

Hi, I'm new to AI and Wan2.2. I am using the V10 version (not Mega) since the Mega changes the face of the character in the first frame. I have tried to increase the length, but the video starts to loop. Is there any way to generate a longer video, or should I increase the batch size? Thank you

Batch size means you sample multiple videos in parallel, so it won't give you a longer video; don't use it for this.
Why does it loop? Maybe you can use i2v instead, but it degrades quality.

Thanks. Yeah, I'm using the i2v one. Is a length of ~110 frames the limit of Wan? The video runs normally, but then the character starts to reverse their action: if he stands up, after the limit he sits back down again 😅

WAN's typical "context window" is about 81 frames. Going higher than that starts to revert to the original scene or prompt. You can use "start to last frame", or make multiple videos using the last frame of one as the first frame of the next. This is best done with the "mega" model. If you are having consistency problems with "mega", make sure you are using the correct workflow configuration.

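As a rough sketch of the chaining idea above: assuming a 16 fps output and the ~81-frame window mentioned here (the fps value and the one-frame overlap are my assumptions, not stated in the thread), a small helper can estimate how many chained clips you need for a target duration:

```python
def plan_clips(target_seconds, fps=16, window=81):
    """Estimate how many chained clips cover a target duration.

    Each clip after the first reuses the previous clip's last frame
    as its start frame, so every extra clip contributes window - 1
    new frames. fps=16 and the one-frame overlap are assumptions.
    """
    target_frames = target_seconds * fps
    new_frames_per_extra_clip = window - 1
    clips = 1
    covered = window
    while covered < target_frames:
        clips += 1
        covered += new_frames_per_extra_clip
    return clips, covered, covered / fps

# e.g. a 15-second video needs 3 chained 81-frame clips
clips, frames, seconds = plan_clips(15)
```

The point of subtracting one frame per extra clip is just to account for the shared start/end frame; in practice you'd generate clip 1, export its final frame as an image, and load that image as the start frame of clip 2.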
Thanks for your explanation and your tip.

As for the inconsistency issues: I was actually using Mega V11 (V12 had a GPU issue) and the mega v3 workflow. I use the non-NSFW version and add custom LoRAs to it with Lora Loader Model Only, placed after Load Checkpoint and before ModelSamplingSD3. Then I just change the Start Frame image and generate. By the end, the character's face looks different.

But if I use the V10 non-NSFW version and add LoRAs the same way in the default workflow wan2.2-i2v-rapid-aio-example.json, the character's face actually stays consistent until the end.

So I'm wondering whether I should generate another picture to use as the "Last Frame" for the Mega model so the character stays consistent. Thanks!

Many LoRAs carry their own version of a face or body, even when they're meant for body parts/clothing or an action.
Last frames should solve your problem (you could try Qwen Image Edit + the Qwen Next Scene LoRA to generate them). An alternative is to train a WAN LoRA for your character, but it will blend with your other LoRAs if those also affect the face; you'll need to minimize the weight of the other LoRAs and keep yours high.
