Introducing RunwayML’s Act-Two: Revolutionizing AI-Driven Character Animation

Introduction

In the rapidly evolving world of artificial intelligence, RunwayML continues to push the boundaries of creative technology. On July 16, 2025, the company unveiled Act-Two, a groundbreaking motion capture model that promises to redefine how creators animate characters. Whether you’re a filmmaker, game developer, or independent artist, Act-Two offers an intuitive, powerful, and accessible way to bring expressive, lifelike characters to life using just a single performance video and a reference image or video.

This blog post dives deep into Act-Two’s features, benefits, and transformative potential, exploring why it’s a game-changer for the animation and entertainment industries.


What Is RunwayML’s Act-Two?

Act-Two is RunwayML’s next-generation motion capture AI, built to simplify and enhance character animation. Unlike traditional motion capture, which requires expensive equipment, complex rigging, and multi-step workflows, Act-Two leverages advanced AI to animate characters with unprecedented ease. By combining a driving performance video (e.g., an actor’s performance) with a character reference (an image or video), Act-Two transfers nuanced movements, facial expressions, and even hand gestures to create high-fidelity animations. Running on the Gen-4 Video model, Act-Two supports a wide range of character types, from realistic humans to non-human creatures, and stylized looks such as Pixar-style animation or claymation.


Key Features of Act-Two

Act-Two stands out for its robust feature set, designed to empower creators with precision and flexibility. Here’s a closer look at what makes it special:

  • Advanced Motion Tracking: Act-Two excels in capturing head, face, body, and hand movements, ensuring animations reflect the subtle nuances of an actor’s performance, such as micro-expressions, timing, and body language.
  • Gesture Control for Character Images: When using a character image as a reference, Act-Two’s gesture control setting allows creators to dictate hand and body movements via the driving performance, adding realism and control (see the illustrative settings sketch after this list).
  • Environmental Motion: For character image inputs, Act-Two automatically adds subtle environmental motion, like a handheld camera shake, to create natural-looking shots in a single generation.
  • Versatility Across Styles: Whether animating a realistic human, a cartoon character, or a fantastical creature, Act-Two supports diverse styles and environments without sacrificing performance fidelity.
  • Seamless Integration: Available through Runway’s web app and API, Act-Two integrates easily into creative workflows, making it accessible for both individual creators and large studios.
  • Improved Fidelity Over Act-One: Compared to its predecessor, Act-Two delivers superior generation quality, enhanced physics, and full-body tracking, making it ideal for high-precision projects like feature films or VR experiences.
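
To make these inputs and settings concrete, here is a minimal, purely illustrative sketch of how an Act-Two job could be described in code. The field names and value ranges are assumptions for illustration only, not Runway’s actual parameters.

```python
from dataclasses import dataclass

# Illustrative only: field names and ranges are assumptions, not Runway's API.
@dataclass
class ActTwoJob:
    driving_video: str            # path or URL of the actor's performance video
    character_reference: str      # path or URL of a character image or video
    reference_is_image: bool      # images enable gesture control; videos keep their own scene motion
    gesture_control: bool = True  # only meaningful for image references
    expressiveness: int = 3       # hypothetical 1-5 scale for facial/body intensity

job = ActTwoJob(
    driving_video="performance.mp4",
    character_reference="character.png",
    reference_is_image=True,
)
print(job)
```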

How Act-Two Works: A Step-by-Step Guide

Using Act-Two is remarkably straightforward, even for those new to AI animation. Here’s how to get started, based on Runway’s official guidance:

  1. Access the Platform: Log into your RunwayML account. Act-Two requires a Standard plan or higher; note that access is currently limited to Enterprise customers and Creative Partners, with broader availability rolling out soon.
  2. Select the Gen-4 Video Model: Open a session in your dashboard and choose the Gen-4 Video model. Then, select the Act-Two icon in the bottom left corner.
  3. Upload or Record a Driving Performance: Drag and drop a performance video or record one directly in the web app. This video captures the movements, expressions, and audio you want to transfer to your character.
  4. Choose a Character Reference: Upload a custom character image or video, or select a preset. Images offer more control over gestures, while videos retain the original scene’s motion and environment.
  5. Adjust Settings: Configure gesture and facial expressiveness settings to fine-tune the animation. For example, enable gesture control for precise body movements or adjust motion intensity for more or less expressive output.
  6. Generate and Review: Click “Generate” to process your video. Once complete, review the output in your session and iterate as needed.

This streamlined workflow eliminates the need for motion capture suits, multiple cameras, or complex software, making Act-Two accessible to creators of all levels.
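
For developers, the same workflow can also be driven programmatically through the Runway API mentioned above. The sketch that follows is a hedged illustration only: the base URL, endpoint path, and field names are placeholders and assumptions, not Runway’s documented API, so check the official API reference before using anything like it.

```python
import requests

API_KEY = "YOUR_RUNWAY_API_KEY"
BASE_URL = "https://api.example-runway-endpoint.invalid/v1"  # placeholder, not the real base URL

# Hypothetical request body mirroring the web-app workflow:
# a driving performance video plus a character reference and a few settings.
payload = {
    "model": "gen4_video",
    "driving_video_url": "https://example.com/performance.mp4",
    "character_reference_url": "https://example.com/character.png",
    "settings": {
        "gesture_control": True,  # assumed flag for image references
        "expressiveness": 3,      # assumed 1-5 intensity scale
    },
}

response = requests.post(
    f"{BASE_URL}/act_two/generate",  # hypothetical endpoint path
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
    timeout=30,
)
response.raise_for_status()
task = response.json()
print("Submitted task:", task.get("id"))
```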


Benefits of Act-Two for Creators

Act-Two’s impact on animation and filmmaking is profound, offering several advantages:

  • Accessibility: By requiring only a performance video and reference character, Act-Two democratizes motion capture, enabling small studios and independent creators to produce Hollywood-quality animations without a massive budget.
  • Efficiency: The simplified pipeline reduces production time and costs, allowing creators to iterate quickly and bring projects to market faster.
  • Creative Freedom: With support for diverse character styles and environments, Act-Two empowers artists to experiment with unique designs and storytelling techniques.
  • High Fidelity: The model’s ability to capture subtle expressions and realistic physics ensures professional-grade results suitable for films, games, and immersive experiences.
  • API Integration: For developers, Act-Two’s availability via the Runway API allows seamless integration into apps, platforms, and workflows, expanding its use cases.
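
Because video generation jobs like this typically run asynchronously, an integration usually submits a task and then polls until it finishes. The loop below is a generic polling sketch under the same assumptions as the earlier snippet (placeholder base URL, hypothetical endpoint and status values), not Runway’s documented API.

```python
import time
import requests

API_KEY = "YOUR_RUNWAY_API_KEY"
BASE_URL = "https://api.example-runway-endpoint.invalid/v1"  # placeholder
task_id = "task_123"  # id returned when the job was submitted

# Poll the hypothetical status endpoint until the generation succeeds or fails.
while True:
    resp = requests.get(
        f"{BASE_URL}/tasks/{task_id}",  # hypothetical endpoint path
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=30,
    )
    resp.raise_for_status()
    status = resp.json().get("status")
    if status in ("SUCCEEDED", "FAILED"):
        break
    time.sleep(10)  # avoid hammering the API

print("Final status:", status)
```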

Act-Two vs. Competitors

The AI video generation space is competitive, with models like OpenAI’s Sora, Google’s Veo 3, and Midjourney’s V1 making waves. However, Act-Two carves a unique niche with its focus on motion capture and character animation. While Veo excels in cinematic video generation and Midjourney offers robust stylization, Act-Two’s precise tracking of head, face, body, and hands sets it apart for projects requiring lifelike human movement. For example, Google’s Flow is great for scene-building but lacks Act-Two’s detailed motion capture capabilities.

Compared to its predecessor, Act-One, Act-Two offers significant upgrades, including full-body tracking and improved animation quality, making it a more versatile tool for professional applications.


Real-World Applications

Act-Two’s versatility makes it a valuable tool across industries:

  • Animation and Film: Create lifelike characters for animated films or VFX shots with natural movements, reducing reliance on costly motion capture setups.
  • Gaming: Animate game characters with realistic expressions and gestures, enhancing player immersion.
  • Virtual Reality (VR): Develop immersive VR experiences with dynamic, expressive characters.
  • Education and Marketing: Transform educational content or brand campaigns by animating characters to deliver engaging, human-like performances.
  • VTubing and Content Creation: Enable VTubers and creators to animate avatars with minimal equipment, using just a webcam and reference image.
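
For the VTubing use case above, the driving performance can be as simple as a short webcam clip recorded locally and then uploaded to Runway. The snippet below is a generic recording sketch using OpenCV (assuming the `opencv-python` package and a default webcam at index 0); it is unrelated to Runway’s own tooling.

```python
import cv2

# Record a short webcam clip to use as a driving performance video.
capture = cv2.VideoCapture(0)  # default webcam
fps = 30
width = int(capture.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(capture.get(cv2.CAP_PROP_FRAME_HEIGHT))
writer = cv2.VideoWriter(
    "performance.mp4",
    cv2.VideoWriter_fourcc(*"mp4v"),
    fps,
    (width, height),
)

seconds = 10
for _ in range(fps * seconds):
    ok, frame = capture.read()
    if not ok:
        break
    writer.write(frame)

capture.release()
writer.release()
print("Saved performance.mp4")
```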

Pricing and Availability

Act-Two is currently available to RunwayML’s Enterprise customers and Creative Partners, with plans to expand access to all users in the near future. It requires a Standard plan or higher, and while exact pricing for Act-Two isn’t specified, Act-One’s cost (10 credits/second on Gen-3 Alpha, 5 credits/second on Gen-3 Alpha Turbo) suggests a similar credit-based model. For detailed pricing, visit Runway’s official website.
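
As a rough, back-of-the-envelope illustration of how credit-based pricing scales with clip length, the sketch below uses Act-One’s published rates purely as a stand-in, since Act-Two’s own rate has not been announced.

```python
# Assumed per-second rates: Act-One's published pricing, used only as a placeholder
# because Act-Two's own rate is not yet specified.
RATES = {
    "gen3_alpha": 10,       # credits per second
    "gen3_alpha_turbo": 5,  # credits per second
}

def estimate_credits(duration_seconds: float, model: str = "gen3_alpha") -> float:
    """Estimate the credit cost of a clip under the assumed rates."""
    return duration_seconds * RATES[model]

# Example: a 30-second performance at 10 credits/second would cost 300 credits.
print(estimate_credits(30))                      # 300
print(estimate_credits(30, "gen3_alpha_turbo"))  # 150
```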


Challenges and Considerations

While Act-Two is a leap forward, it’s not without challenges. Generated videos may show a noticeable loss in resolution, which could be an issue for projects requiring crisp visuals. Additionally, while Act-Two excels in static shots, handling of full-body motion and dynamic camera movement remains an area for improvement, according to early community feedback. Creators should also be mindful of Runway’s usage policies, which prohibit generating content featuring public figures without permission.


The Future of Act-Two

RunwayML’s commitment to responsible AI development ensures Act-Two includes robust content moderation and safety measures, such as blocking unauthorized use of public figures’ likenesses. As the model evolves, we can expect further enhancements, such as support for dynamic camera angles and even more refined motion tracking. With partnerships like Lionsgate already in place, Act-Two is poised to shape the future of storytelling in film, gaming, and beyond.


Conclusion

RunwayML’s Act-Two is a revolutionary tool that bridges the gap between idea and execution, making high-fidelity character animation accessible to creators of all levels. Its advanced motion capture, versatile style support, and seamless workflow empower artists to tell compelling stories without the barriers of traditional motion capture. Whether you’re crafting a feature film, a game, or a viral VTuber video, Act-Two offers the tools to bring your vision to life.

Visit Runway’s website to explore Act-Two!
