
The Motion Clone Engine - Video-to-Video AI Studio

Prompting movement is guessing. Cloning movement is directing.


If you want your AI character to perform a specific viral dance, a complex martial arts move, or a subtle acting performance, text prompts aren't enough. You need precision.

Introducing The Motion Clone Engine.

This is a professional Video-to-Video workflow designed for ComfyUI. It uses the cutting-edge SCAIL (Subject-Conditioned Animation) architecture combined with DWPose to extract the exact skeletal motion from a source video and map it onto your static character image.

The result is a perfect clone of the movement, with your character's face, outfit, and identity locked in tight.

Choose Your Power Level

I have engineered three versions of this engine to fit your hardware and production needs.

πŸ”₯ V2: The Ghost Edition (Flagship)

The unrestricted, high-performance production suite.

  • πŸ‘₯ Multi-Person Tracking: The only version capable of tracking and animating groups (2-6 people) simultaneously.
  • ⚑ Lightning Speed: Optimized with Torch Compile and Lightning LoRAs to render significantly faster than standard workflows.
  • 🎞️ Cinematic Polish: Includes built-in RIFE Smoothing for fluid video and the exclusive "Director's View" combined output.
  • βš™οΈ Max Stability: Includes Block Swap for advanced memory management.


πŸ› οΈ V1: The Agent Edition (Pro)

The workhorse for single-subject creators.

  • βœ… Core Motion Cloning: High-quality Wan 2.1 + SCAIL generation.
  • βœ… Single-Person Focus: Optimized for tracking one subject perfectly.
  • βœ… Standard Speed: Reliable generation without the advanced compilation accelerators.


πŸ§ͺ V0.5: The Lab Assistant (Free)

The entry point to test the technology.

  • βœ… Raw Generation: Access the core motion transfer capability to test how SCAIL works.
  • ❌ Limitations: No post-processing, no speed optimizations, no RIFE, and restricted to single-person inputs.



A Look Inside The Workflow


1. The Input Console
Upload your source video (the motion) and your target image (the character). The workflow handles the resizing logic for you.

  • Note: Video dimensions must be divisible by 32.
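For context, here is a minimal sketch of the kind of resizing logic involved. The actual node handles this automatically; the function below is purely illustrative and not part of the workflow's code.

```python
# Illustrative only: snap a frame size to the nearest multiple of 32,
# as the video model requires. The real workflow node does this for you.
def snap_to_32(width: int, height: int) -> tuple[int, int]:
    snap = lambda x: max(32, round(x / 32) * 32)
    return snap(width), snap(height)

# Example: a 1080x1920 source becomes 1088x1920 (both divisible by 32).
print(snap_to_32(1080, 1920))  # (1088, 1920)
```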


2. Pose Extraction
Using DWPose, the engine strips away the background and isolates the skeletal data. In the Ghost version, you can toggle between Single and Multi-person detection here.
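Conceptually, this stage walks the source video frame by frame and keeps only the skeleton data. A rough sketch of that loop is below; `extract_skeleton` is a hypothetical stand-in for the DWPose node, not a real API.

```python
# Illustrative sketch of the pose-extraction pass.
# `extract_skeleton` is a stand-in for the DWPose node, not a real API.
def build_pose_sequence(frames, extract_skeleton, multi_person=False):
    pose_frames = []
    for frame in frames:
        skeletons = extract_skeleton(frame)   # keypoints for each detected person
        if not multi_person:
            skeletons = skeletons[:1]         # keep only the primary subject
        pose_frames.append(skeletons)         # the background is discarded entirely
    return pose_frames
```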

3. The SCAIL Engine
This is the secret sauce. Unlike standard ControlNet, SCAIL conditions the video generation on the subject's identity, ensuring your character doesn't morph into the background during fast motion.
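As a rough mental model (not SCAIL's actual internals), the generator receives two conditioning streams instead of one: the pose sequence that drives the motion, and a fixed identity reference of your character that every frame is anchored to. All function names below are hypothetical.

```python
# Conceptual sketch of dual conditioning; not the real SCAIL implementation.
def generate_clip(model, pose_frames, character_image, encode_identity, sample_frame):
    identity = encode_identity(character_image)  # computed once, reused for every frame
    output = []
    for pose in pose_frames:
        # Each frame is conditioned on both the motion (pose) and the
        # subject (identity), so fast motion cannot drift the character
        # toward the background the way pose-only control can.
        output.append(sample_frame(model, pose=pose, identity=identity))
    return output
```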



✨ Get This Workflow & Dozens More in The Midnight Lab ✨

While you can purchase this workflow individually, the absolute best value is a membership to The Midnight Lab.

The Midnight Lab is my all-access subscription service where you get my entire library of professional-grade tools (including the Ghost Edition of this workflow) for a single monthly price.


>> Click Here to Explore The Midnight Lab Membership <<


⚠️ Important Note Before Buying

Please download the Free (V0.5) Edition first.

This allows you to test your hardware compatibility and get comfortable with the core workflow logic at no cost.

The Ghost Edition (V2) is a complex, high-performance system designed for advanced ComfyUI users. It requires a solid working knowledge of managing custom nodes and dependencies. Please only purchase the flagship version if you are confident in your setup.
