WF with no length limit/guidance/character consistency for Rapid AIO Vx and MEGA Vx
Easy to use "unlimited length" workflows for VX and Mega VX:
Download "RapidAIO Mega" Workflow (V2.5): https://drive.google.com/file/d/1jOkDFawU2ZBSiLDrwe0bZl-dKWlkrLId/view?usp=sharing
Download "RapidAIO" Workflow (V1.1): https://drive.google.com/file/d/1pEW2poIoXWqDTbMv-08v06bP2f53QdZZ/view?usp=sharing
Not as easy to use, but more advanced "unlimited length" workflows for Mega VX:
Three input images instead of one for guiding your final video and keeping the character more consistent. You can now add tags to each prompt to tell the WF to use one of the three images as the end frame for that batch. That helps you guide your video and keep the character more consistent. For best results, all images should have exactly the same size and background, with the same character in a different pose.
Download "RapidAIO Mega" Workflow (V3.0): https://drive.google.com/file/d/1I1lychrF2kdlPxLkETbSLp93DBr8OC70/view?usp=sharing
Download "RapidAIO Mega" Workflow (V4.0): https://drive.google.com/file/d/1DriKTI_2CHr-LZFr85o1x5m4BDiAJiAM/view?usp=sharing
Same logic as V3.0, but you can select as many images as you like. If you want to end a batch with one of the images from the list, simply add '<<NUMBER>>' (the image number between double angle brackets) to the prompt. For image number 3 as end frame the tag would be <<3>>. Example prompt:
The man swings his Axe.<<3>>
Do not set a tag for a prompt if you want to let the AI decide how the batch will end.
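In case it helps, here is a rough sketch of how such an end-frame tag could be split off a prompt line (my own illustration in Python; the function name and the regex are assumptions, not the workflow's internal code):

import re

# Hypothetical helper: split an end-frame tag like "<<3>>" off a prompt line.
# Returns the cleaned prompt plus the 1-based image index, or None if no tag
# is set (then the AI decides how the batch ends).
def split_end_frame_tag(prompt):
    match = re.search(r"<<\s*(\d+)\s*>>", prompt)
    if match is None:
        return prompt.strip(), None
    index = int(match.group(1))
    cleaned = re.sub(r"<<\s*\d+\s*>>", "", prompt).strip()
    return cleaned, index

print(split_end_frame_tag("The man swings his Axe.<<3>>"))
# -> ('The man swings his Axe.', 3)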
Download "RapidAIO Mega" Workflow (V4.2): https://drive.google.com/file/d/1xx8Zrx1mKbzu6inleVRSq5ssCMaA5-T9/view?usp=sharing
Same as 4.0, but when an end frame is provided, only that image is used as the starting image of the next batch. The result is that the character comes back 100% consistent. The drawback is a less smooth transition between those two batches. You can also add a random end frame tag, which picks a random end frame from the images in the list for your batch.
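A rough sketch of the V4.2 behaviour described above (explicit end frame, random end frame, or none); the names and the random-tag handling are placeholders, not the actual node logic:

import random

# Hypothetical: pick the end frame for a batch from the user-supplied image list.
def pick_end_frame(images, tag_index=None, random_tag=False):
    if random_tag:
        return random.choice(images)   # random image from the list
    if tag_index is not None:
        return images[tag_index - 1]   # explicit 1-based index from <<N>>
    return None                        # no tag: let the AI decide

# When an end frame is set, only that image is reused as the start of the next
# batch, which restores the character fully but makes the transition less smooth.
def next_batch_start(end_frame, last_frames_of_previous_batch):
    return [end_frame] if end_frame is not None else last_frames_of_previous_batch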
----General notes----
- Only change values in the "Input" group!!!
- Keep in mind that the "RapidAIO" WF only takes the last image from the last batch as the starting image for the next batch, so forget about character consistency. It's not possible with that model.
- The "RapidAIO Mega" WF takes the last 12 images from the last batch as guidance for the next batch = a little better character consistency.
--Settings worth mentioning--
Megapixel:
The source image is rescaled to "Megapixel". The resolution of the video is set to the rescaled width/height of the first image (slightly corrected to be divisible by 16). Use 0.5 for fast results, or lower for eye cancer. 0.8 is a good value for later upscaling. Anything above that does not work well with 16GB VRAM or less.
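Roughly, the rescaling boils down to this (my own illustration, not the node's code): scale the source so its area matches the chosen megapixel value, then round width and height down to multiples of 16.

def video_resolution(src_width, src_height, megapixels):
    # Scale so that width*height is about megapixels * 1,000,000,
    # keeping the source aspect ratio.
    scale = (megapixels * 1_000_000 / (src_width * src_height)) ** 0.5
    width = int(src_width * scale)
    height = int(src_height * scale)
    # Slight correction so both sides are divisible by 16.
    width -= width % 16
    height -= height % 16
    return width, height

print(video_resolution(1024, 1536, 0.5))   # -> (576, 864), about 0.5 MP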
Seconds:
The value is multiplied by 16, plus 1. The +1 is needed! The length for WanVaceToVideo has to follow the 4-step pattern (1, 5, 9, 13, 17, ...).
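As a worked example (assuming 16 fps as above; the function name is my own):

def wan_length(seconds):
    # Frame count for WanVaceToVideo: seconds * 16 fps, plus the required +1,
    # which keeps the value in the 4-step pattern 1, 5, 9, 13, 17, ...
    frames = seconds * 16 + 1
    assert frames % 4 == 1, "length must be of the form 4*n + 1"
    return frames

print(wan_length(5))   # 81 frames for a 5 second batch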
--Stuff I'm working on--
- A wan 2.2 animate version for RapidAIO Mega
Sounds good - is it likely to play ball with 12GB VRAM? Based on what you've said above, it sounds like it could be a challenge, but I suppose with a few tweaks here and there...
The multi-prompts box is interesting - can individual LoRAs be loaded into each subgraph to essentially evolve the prompt? And is it designed so that the RapidAIO Batch subgraph can be copied to extend it, or are all the extensions within the one subgraph?
If the WAN2.2-14B-Rapid-AllInOne runs with other workflows on your system, this workflow will run too. It cleans the VRAM before and after the VAE decode in each batch. The limitations are the megapixel size and length of each batch. With my RTX 4080 (16GB VRAM) and 64GB RAM I can do up to 5 seconds/batch at 0.8 megapixel and up to 10 seconds/batch at 0.5 megapixel without an "out of VRAM" crash. Additional LoRAs could be placed within the workflow (I guess - never tested it).
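The cleanup itself is nothing exotic; here is a hedged sketch of the kind of call that frees VRAM around the VAE decode (plain PyTorch, not the exact nodes used in the workflow):

import gc
import torch

def free_vram():
    # Drop Python-level references, then release cached CUDA memory.
    gc.collect()
    if torch.cuda.is_available():
        torch.cuda.empty_cache()

# In this sketch it would be called before and after the VAE decode of each batch.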
I do not understand what you mean by your last question.
Thanks for your reply 🙂
What I meant was that, when you say 'unlimited length', do you mean that this workflow generates and stitches multiple shorter videos together using one 'RapidAIO Batch' subgraph for each part of the video? Or have I misinterpreted what you meant here?
Yes. If you enter, for example, seconds = 5 and 4 prompts separated by "return", you get a video with a total length of 5 s × 4 prompts = 20 seconds. The workflow uses the last 12 frames of the last batch as guidance for the next batch.
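A quick sketch of that arithmetic and of the prompt splitting (illustrative only; the workflow's delimiter handling may differ):

def total_length(prompt_box, seconds_per_batch):
    # One batch per non-empty line in the prompt box.
    prompts = [line for line in prompt_box.splitlines() if line.strip()]
    return len(prompts) * seconds_per_batch

text = "The man walks in.\nHe sits down.\nHe picks up the axe.\nHe swings it."
print(total_length(text, 5))   # 4 prompts * 5 s = 20 seconds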
This is brilliant and exactly what I was looking for! In my testing so far my PC becomes sluggish and unusable after only 1 or 2 video generations. Is there possibly some resource cleanup that still needs to be done after the video is done generating?
What do you mean exactly? There is a cleanup in each batch before and after the VAE decode. That should be enough. I only tested 3 prompts/batches in total and it works without any problems. Adding more prompts/batches should not change that.
Pinning this as people may find this pretty useful and contributes to the "ease of use" I'm going for. Thank you!
My recommendation, if possible, is to make the starting image optional so it handles T2V too.
FINALLY, I FIXED IT. I updated the AMD chipset driver and updated the BIOS, that's it. But I am running into another issue now: I am trying to test ver 7, which says I should use the v3 workflow, but the workflow gives a missing WanVideoVACEStartToEndFrame node. I have a 3080 10GB, what is the best one to try now, guys? Shall I try the latest v9 or the mega v9, or which is the fastest version?
Is the workflow for the WAN Animate version still in development?
@okims
Right now I do not have the time for it. I have one more WF finished (but not published on this page) where I combine the Vx version with the MEGAx version. So one batch consists of two parts: Vx followed by MEGAx. The goal was to use the Vx version (which still seems to be preferred by many users) with the ability to provide end frames.
I wrapped the whole process into one standalone node. Feel free to test it!




