CONTACT

Reach Us

Have a general question?

Email Us

Reach out to learn more and request a demo

Investor Inquiries

Let's connect on how to invest

FAQs

Frequently Asked Questions

Tech differentiator: We preserve individual performer nuance. AI can replicate a face, but not a performance.

Why can't I just use Runway Act Two?

The landmark vs. muscle movement issue: Runway Act Two tracks facial landmarks—key points that show gross movement like "mouth open" or "eyebrow raised." They capture the skeleton of performance. Greasepaint traces a full 3D model of the face, capturing every muscle movement between those landmarks. We get the subtle tightening around the eyes when hiding pain, the micro-tension in the jaw when holding back anger—the acting details that make performance feel real.

The angle problem: Act Two uses a single reference image, so it approximates the actor's face. When the face turns or changes angle shot-to-shot, consistency breaks down. Even the mouth and teeth are wrong because they're guessed at, not the actor's actual features. Greasepaint creates a true 3D asset from the actor, so every angle is accurate and consistent.

Performance fidelity: Act Two is great for motion capture, but all shots need to match the frontal video reference for accurate transfer. Greasepaint works in full 360-degree scene work with complete camera freedom.
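
To make the landmark-versus-mesh distinction concrete, here is a minimal, illustrative sketch (the mesh size, landmark count, and random displacement data are assumptions for the example, not Greasepaint's actual pipeline): sampling a dense per-vertex deformation field at only a few landmark points keeps just a small fraction of the motion signal a full 3D face mesh carries.

```python
import numpy as np

# Hypothetical sizes: a dense 3D face mesh vs. a sparse landmark set.
NUM_VERTICES = 5000   # dense face mesh (illustrative)
NUM_LANDMARKS = 68    # classic facial-landmark count

rng = np.random.default_rng(0)

# Per-vertex displacement between a neutral face and an expressive frame.
# In a real capture this comes from the tracked 3D mesh; here it is random.
neutral = rng.normal(size=(NUM_VERTICES, 3))
expressive = neutral + 0.01 * rng.normal(size=(NUM_VERTICES, 3))
displacement = expressive - neutral  # the full "muscle movement" field

# A landmark-only tracker observes just a sparse subset of the vertices.
landmark_idx = rng.choice(NUM_VERTICES, NUM_LANDMARKS, replace=False)
landmark_displacement = displacement[landmark_idx]

# Compare how much deformation signal each representation carries.
full_signal = np.sum(np.linalg.norm(displacement, axis=1))
landmark_signal = np.sum(np.linalg.norm(landmark_displacement, axis=1))
print(f"Deformation signal kept by landmarks: {landmark_signal / full_signal:.1%}")
```

The numbers are only proportional, but the point stands: a sparse landmark set can represent gross motion, while the in-between surface motion (the subtle tightening and micro-tension described above) lives in the vertices it never samples.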

What about Wan Animate?

Wan Animate is excellent for character animation and replacement, especially when you need to match environmental lighting. However, it has a critical technical limitation: it only works on frontal faces. When the angle changes, the animation breaks down and identity is lost. This makes it unusable for dynamic cinematography or any scene requiring camera movement and varied angles.

Additional limitations: Wan Animate is trained on generic performance data, not individual actors. It can replicate what a performance looks like, but not how a specific performer expresses emotion.

The Greasepaint difference: We create a true 3D asset of the actor that maintains identity and performance fidelity at any angle—profile, three-quarter, overhead, any camera position. Our Expression-Lock™ technology is trained exclusively on each individual actor, capturing how you specifically raise an eyebrow when suspicious, how you express heartbreak. Wan Animate gives you frontal-angle animation. Greasepaint gives you full 360-degree authentic human performance that holds up in professional cinematography.

Sora 2: Why not use that?

Sora 2's "cameos" feature is fun for social media—putting yourself into generated scenes with your voice and likeness. But it's generating a synthetic performance, not capturing your actual acting. What Sora 2 does: Takes your identity reference and applies it to a GenAI character performing GenAI actions. It's not capturing your performance choices, timing, or emotional nuance. The character moves according to what the AI thinks looks right, not what you as an actor chose to do. What Greasepaint does: Captures your real performance—your acting choices, your timing, your emotional delivery—and transfers it into the scene. You're actually performing, not having a digital puppet approximated from your face. Sora 2 results also aren't true to actor identity from all angles and don't offer professional-level control. It's a creative tool for experimentation, not a production tool for professional filmmaking.

Why not Google Veo 3.1 and HeyGen?

Veo 3.1 is an excellent video generation model, and HeyGen makes good use of it for avatar-based content. However, HeyGen avatars are designed to be text-driven digital spokespeople—robots reading scripts.

The fundamental difference: HeyGen avatars aren't trained on real actors, so they can't replicate an individual performer's nuance. They're optimized for to-camera, talking-head work (corporate training, presentations, marketing videos), not free 360-degree scene work with camera movement and blocking.

The performance gap: The actor isn't acting—they're being simulated. There's no human performance transfer, no emotional authenticity, no control over acting choices. Greasepaint transfers real actors' human performances into scenes, preserving the craft of acting. We're built for cinematic production where the performance needs to hold up at any angle, in any lighting, next to real footage.

Why do I need Greasepaint?

Consumer AI tools are amazing for social content, quick ideas, and experimentation. But there's a difference between "good enough for Instagram" and "holds up in professional production."

The quality gap: Consumer tools use generic face approximations and simplified motion capture. They're optimized for speed and accessibility, not frame-by-frame performance fidelity. When you cut that footage next to real actors or screen it on a large format, the lack of authentic performance becomes obvious.

What professionals need: Greasepaint is built for production environments where the digital performance needs to be invisible—where audiences should believe they're watching a real person, not an AI effect. We capture the actual nuance of individual performers and output in professional formats (HDR, 16-bit EXR at 24fps, expanding to higher frame rates) that integrate into real post-production pipelines.

Is Greasepaint ethical AI?

Moonvalley deserves credit for training on licensed data rather than scraped content. But there's a critical difference between "licensed footage" and "consented performers."

How Moonvalley works: They license footage from producers and studios—the rights holders own the footage. But the humans performing in that footage, whose faces and performances trained the model, were never asked if their work could be used this way. The actors don't earn revenue from the AI model they helped create.

How Greasepaint works: We don't train on footage of other actors. Each actor is our partner. They consent to having their specific identity and performance style captured, and they earn revenue when their digital likeness is used. We're not exploiting talent—we're creating a new revenue stream for performers.

Rights protection: Unlike any other platform, rights are tied to the actual actor performing. We use DRM-embedded watermarks (hidden pixel modifications) tied to each face, proving their likeness was used with consent and tracking usage of the digital double. Actors maintain control.
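
The answer above describes DRM-embedded watermarks only as "hidden pixel modifications." As a purely illustrative sketch of that general idea (the payload format, bit-plane choice, and function names are assumptions, not Greasepaint's actual scheme), a least-significant-bit watermark hides a consent identifier in pixel values without visibly changing the frame:

```python
import numpy as np

def embed_watermark(frame: np.ndarray, payload_bits: np.ndarray) -> np.ndarray:
    """Hide payload bits in the least-significant bit of the first pixels.

    Illustrative only: production DRM watermarks are designed to survive
    compression and editing, which plain LSB embedding does not.
    """
    flat = frame.reshape(-1).copy()
    n = payload_bits.size
    flat[:n] = (flat[:n] & ~np.uint8(1)) | payload_bits.astype(np.uint8)
    return flat.reshape(frame.shape)

def extract_watermark(frame: np.ndarray, num_bits: int) -> np.ndarray:
    """Recover the hidden bits from a watermarked frame."""
    return frame.reshape(-1)[:num_bits] & np.uint8(1)

# Hypothetical payload: an actor/consent identifier encoded as bits.
payload = np.unpackbits(np.frombuffer(b"ACTOR-0042:CONSENTED", dtype=np.uint8))
frame = np.random.randint(0, 256, size=(1080, 1920, 3), dtype=np.uint8)

marked = embed_watermark(frame, payload)
recovered = np.packbits(extract_watermark(marked, payload.size)).tobytes()
print(recovered)  # b'ACTOR-0042:CONSENTED'
```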

Ray 3 is in 4K. Does Greasepaint match?

Yes. Greasepaint outputs in 4K with HDR and 16-bit EXR at 24fps (with higher frame rates coming soon). We're built for professional pipelines.

Where we differ from Ray 3: Ray 3 is a generative AI video model—it creates synthetic scenes from prompts and reasoning. It's excellent for ideation, concept exploration, and generated environments. But it's not capturing real actor performances with frame-accurate emotional transfer.

Greasepaint's focus: We're preserving authentic human performance from real actors and making it production-ready. Our outputs sit invisibly next to real footage because they're based on actual human choices, not AI-generated approximations. Ray 3 generates. Greasepaint captures, protects, and deploys real performances. Both tools can coexist in a pipeline—Ray 3 for environments and concept work, Greasepaint for authentic human performance.
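
For context on the formats named above, this is a minimal sketch of writing one 16-bit (half-float) EXR frame with the OpenEXR Python bindings; the 4K resolution, channel layout, and file name are illustrative, not a description of Greasepaint's renderer.

```python
import numpy as np
import OpenEXR
import Imath  # ships with the OpenEXR Python bindings

WIDTH, HEIGHT = 3840, 2160  # 4K UHD frame (illustrative)

# A linear-light HDR image; values above 1.0 are legal in EXR.
rgb = (np.random.rand(HEIGHT, WIDTH, 3) * 4.0).astype(np.float32)

# 16-bit "half" channels, the usual choice in film pipelines.
half = Imath.Channel(Imath.PixelType(Imath.PixelType.HALF))
header = OpenEXR.Header(WIDTH, HEIGHT)
header["channels"] = {"R": half, "G": half, "B": half}

out = OpenEXR.OutputFile("frame_0001.exr", header)
out.writePixels({
    "R": rgb[:, :, 0].astype(np.float16).tobytes(),
    "G": rgb[:, :, 1].astype(np.float16).tobytes(),
    "B": rgb[:, :, 2].astype(np.float16).tobytes(),
})
out.close()
```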

What's your actual workflow and cost compared to competitors?

Scanning: Greasepaint's scanning cost is significantly higher than competitors'. We invest in high-quality actor assets using our LightCage system or mobile app to create true-to-actor digital identities for professional work. This isn't about creating quick approximations—it's about building assets that hold up in production.

Performance capture: Performers act on an iPhone or a standard camera. They can see their scene partner on screen (ensuring natural interaction), but don't see their digital double performing live. The platform currently supports one scene partner; multiple partners are available in pro captures (expanding soon).

Post-capture control:

  • Eye gaze can be adjusted for accurate eyelines

  • Performance fine-tuning is available (a blend mode nudges the performance toward specific emotions like angry/happy/sad/neutral)

  • An action editor allows cutting together performance pieces with prompted stunts on a timeline

  • Pro users can export FBX models for pose adjustment, placing objects in hands, and motion editing, then re-import

Rendering: Outputs 24fps (higher frame rates coming), HDR, 16-bit EXR.

Philosophy: Unlike competitors built for speed and viral content, we're designed for professional film production where fans expect to see the actual person they're watching—not an AI-slop approximation. This is meant to sit invisibly next to real footage, not be an AI gimmick.
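
As an illustration of the blend-mode idea mentioned in the post-capture controls above (the coefficient layout, preset values, and function name are assumptions made for the sketch, not the product's actual controls), nudging a captured performance toward an emotion can be expressed as interpolating per-frame expression coefficients toward an emotion preset:

```python
import numpy as np

# Hypothetical per-frame expression coefficients from a capture
# (rows = frames, columns = expression channels such as brow, jaw, lips).
NUM_FRAMES, NUM_CHANNELS = 240, 52
captured = np.random.rand(NUM_FRAMES, NUM_CHANNELS)

# Hypothetical emotion presets: one target coefficient vector per mood.
presets = {
    "angry": np.random.rand(NUM_CHANNELS),
    "happy": np.random.rand(NUM_CHANNELS),
    "sad": np.random.rand(NUM_CHANNELS),
    "neutral": np.zeros(NUM_CHANNELS),
}

def nudge_toward(performance: np.ndarray, emotion: str, amount: float) -> np.ndarray:
    """Blend every frame of a captured performance toward an emotion preset.

    amount=0.0 keeps the original acting untouched; amount=1.0 replaces it
    with the preset. Small values shift the emotional color while keeping
    the actor's own timing and choices.
    """
    target = presets[emotion]
    return (1.0 - amount) * performance + amount * target

angrier_take = nudge_toward(captured, "angry", amount=0.2)
print(angrier_take.shape)  # (240, 52)
```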

Greasepaint Technical Overview

Tech differentiator: We preserve individual performer nuance. AI can replicate a face, but not a performance.


What is Greasepaint

How it Works

Core Technology

Why it Matters

Business Model

Current Status

What is Greasepaint

A platform that captures, stores, and licenses authentic human performances for use in digital content. Performers create a high-fidelity digital identity that their own acting can animate, captured remotely with only a single camera, or professionally using a highly accurate multi-camera rig. This is the platform for placing real actors, artists, and influencers into any AI-generated content with photoreal precision and performance authenticity.

How it Works

Capture

  • Performers are scanned in-studio using our LightCage system, or on mobile via a self-scan app or a remote scan rig.

  • Scan captures look, performance training, and movement data

  • This data trains:

    • A 3D model based on SIMPL that understands not only the actor's look, but how they move

    • A face identity model that learns their identity and emotional performance and is adaptable to emotion in a scene

    • A series of custom LoRAs that understand their general look

  • Creates a consented digital identity with embedded rights and ownership

  • Stored in an AI-safe performance vault (a minimal sketch of the resulting identity record follows this list)
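
A minimal sketch of what the consented digital-identity record created above might contain, based only on the items in this list; the field names, types, and paths are assumptions, not the platform's actual schema:

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class ConsentRecord:
    """Rights and ownership embedded with the identity (illustrative)."""
    performer_name: str
    consent_signed_at: datetime
    permitted_uses: list[str]      # e.g. ["film", "episodic"]
    royalty_share: float           # performer's share of license revenue

@dataclass
class DigitalIdentity:
    """One performer's vaulted identity asset (illustrative)."""
    performer_id: str
    body_model_ref: str            # trained 3D body/movement model
    face_identity_model_ref: str   # per-actor face/emotion model
    lora_refs: list[str] = field(default_factory=list)  # look LoRAs
    consent: ConsentRecord | None = None
    vault_uri: str = ""            # location in the performance vault

identity = DigitalIdentity(
    performer_id="perf-0042",
    body_model_ref="models/perf-0042/body.ckpt",
    face_identity_model_ref="models/perf-0042/face.ckpt",
    lora_refs=["models/perf-0042/look_lora.safetensors"],
    consent=ConsentRecord("Jane Doe", datetime(2025, 1, 15), ["film"], 0.10),
    vault_uri="vault://performers/perf-0042",
)
print(identity.performer_id, identity.consent.permitted_uses)
```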

Performance

  • The performer acts at home using an iPhone or in a studio using a multi-camera setup for larger action

  • System tracks facial performance and full body movement

  • Additional action (stunts, runs, jumps) can be prompted, added, and blended on a timeline (see the sketch below)
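
A minimal sketch of blending a prompted action with captured motion on a timeline, as referenced above; the clip representation (per-frame pose vectors) and the crossfade length are assumptions made for the illustration:

```python
import numpy as np

def crossfade(clip_a: np.ndarray, clip_b: np.ndarray, overlap: int) -> np.ndarray:
    """Join two motion clips (frames x pose channels) with a linear crossfade.

    The last `overlap` frames of clip_a are blended into the first
    `overlap` frames of clip_b so the transition has no visible pop.
    """
    weights = np.linspace(0.0, 1.0, overlap)[:, None]
    blended = (1.0 - weights) * clip_a[-overlap:] + weights * clip_b[:overlap]
    return np.concatenate([clip_a[:-overlap], blended, clip_b[overlap:]])

POSE_CHANNELS = 63  # illustrative skeleton pose size
captured_walk = np.random.rand(120, POSE_CHANNELS)  # performer's own take
prompted_jump = np.random.rand(48, POSE_CHANNELS)   # generated stunt clip

timeline = crossfade(captured_walk, prompted_jump, overlap=12)
print(timeline.shape)  # (156, 63): 120 + 48 - 12 frames
```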

Deployment

  • 3D performance placed in any scene with any camera angles, or exported to another 3D program for editing

  • Rendered with AI using reference for styling (makeup, hair, wardrobe, environment)

  • AI rendering uses 3D geometry-guided rendering to carry the nuance of the 3D mesh into the diffusion-rendered asset (see the sketch after this list)

  • Can render full environment or ‘performance as an asset’ for traditional compositing
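
As a rough open-source analogue of geometry-guided rendering (this sketch uses the diffusers ControlNet pipeline as a stand-in; the model names, the depth-map file, and the prompt are assumptions, not Greasepaint's proprietary renderer), a depth map rendered from the tracked performance mesh can condition a diffusion model so the generated pixels follow the 3D geometry:

```python
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

# Depth-conditioned ControlNet as a stand-in for geometry guidance.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-depth", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# A depth map rendered from the tracked 3D performance mesh
# (hypothetical file produced by an upstream rasterizer).
depth_map = Image.open("performance_depth_0001.png")

frame = pipe(
    prompt="period drama, candle-lit interior, film grain",  # styling reference
    image=depth_map,              # geometry guidance for this frame
    num_inference_steps=30,
).images[0]
frame.save("styled_frame_0001.png")
```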

Core Technology

Expression-Lock™

  • Proprietary ML model that learns individual performer’s emotional expressions

  • Trained on a specific performer, not generic data

  • Preserves how that person uniquely expresses emotion

Geometry-Guided Performance Transfer

  • Traces a full 3D model of the face from capture footage

  • Captures every muscle movement between facial landmarks

  • Competitors track landmarks only (gross movement: mouth open, eyebrow raised)

  • Greasepaint captures in-between muscle movement (subtle tension, micro-expressions)

Technical Difference:

  • Landmark systems: skeleton of performance

  • Greasepaint: complete muscle data that makes performance feel authentic

  • The Expression-Lock™ face behaves only like the actor; it won’t become a third person

  • 360-degree freedom of movement. Not locked to frontal angles.

  • No identity bleed - performer always looks like themselves from all angles

Why it Matters

For Performers:

  • Work remotely from any location

  • Dangerous stunts done safely

  • Control and royalties on all uses

  • Performance integrity preserved

For Producers/Investors:

  • Legal, consented, licensed human performance

  • No rights clearance risk

  • Scalable without sacrificing authenticity

  • Marketplace model with recurring revenue

Business Model

  • Scanning Services: Initial capture and onboarding

  • Subscriptions: Platform access for studios/agencies/creators

  • Marketplace Commission: 25% on performance licenses

Current Status

  • $1.8M production revenue delivered

  • Used by Marvel, 20th Century, Amazon, Hulu, Peacock

  • Active pipeline with Disney, Lionsgate, Annapurna

  • Featured in Amazon Prime's "Étoile"

Market Position

Greasepaint is infrastructure for authenticated human performance at scale. Not synthetic avatar generation—real actor performances with legal certainty and emotional authenticity.


If you have any questions, please contact us at hello@greasepaint.ai