# How to Use Runway Gen-4 Motion Brush for Precise Camera and Subject Movement Control

## Mastering Runway Gen-4 Motion Brush for AI Video Generation

Runway Gen-4 introduces an advanced Motion Brush tool that gives creators granular control over how subjects move and how the camera behaves in AI-generated videos. Unlike simple text-to-video prompts, the Motion Brush lets you paint movement directly onto specific regions of your frame, delivering a level of cinematic precision that text prompts alone cannot reliably achieve. This guide walks you through the complete workflow—from setup to export—so you can produce professional-quality AI videos with intentional, controlled motion.

## Step 1: Set Up Your Runway Environment

Before using the Motion Brush, ensure you have the right account tier and API access configured.

- Create or upgrade your account at app.runwayml.com. Motion Brush is available on the Standard plan and above.
- Install the Runway Python SDK for programmatic workflows:

  ```bash
  pip install runwayml
  ```

- Authenticate your environment:

  ```python
  import runwayml

  client = runwayml.RunwayML(api_key="YOUR_API_KEY")
  ```

- Verify your credits and quota:

  ```python
  account = client.accounts.retrieve()
  print(f"Credits remaining: {account.credits}")
  ```

## Step 2: Upload Your Source Image or Frame

The Motion Brush works on a reference image that serves as your first frame. For best results, use a high-resolution image (at least 1280×768) with clearly defined subjects.

```python
# Upload a source image for video generation
image_upload = client.assets.create(
    file="./scene_reference.png",
    name="motion-brush-source"
)
print(f"Asset ID: {image_upload.id}")
```

## Step 3: Define Motion Brush Regions

The core of the Motion Brush is region-based motion assignment. You paint masks over parts of your image, then assign directional vectors and intensity values to each region independently.

### Key Motion Brush Parameters

| Parameter | Type | Range | Description |
| --- | --- | --- | --- |
| `direction` | Vector (x, y) | -1.0 to 1.0 | Movement direction for the painted region |
| `speed` | Float | 0.0 to 10.0 | Velocity of the motion within the region |
| `ambient_strength` | Float | 0.0 to 1.0 | Organic micro-motion in unpainted areas |
| `proximity_weight` | Float | 0.0 to 1.0 | How sharply motion falls off at mask edges |
| `camera_motion` | Preset string | See list below | Global camera behavior for the entire clip |
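The ranges above can be enforced client-side before you spend credits on a generation. Here is a minimal sketch of such a check; the `validate_region` helper and its error messages are illustrative, not part of the Runway SDK:

```python
def validate_region(region):
    """Check one Motion Brush region dict against the documented ranges.

    Illustrative helper -- not part of the Runway SDK.
    Returns a list of error messages (empty if the region is valid).
    """
    errors = []
    dx, dy = region.get("direction", (0.0, 0.0))
    if not (-1.0 <= dx <= 1.0 and -1.0 <= dy <= 1.0):
        errors.append("direction components must be in [-1.0, 1.0]")
    if not (0.0 <= region.get("speed", 0.0) <= 10.0):
        errors.append("speed must be in [0.0, 10.0]")
    if not (0.0 <= region.get("proximity_weight", 0.0) <= 1.0):
        errors.append("proximity_weight must be in [0.0, 1.0]")
    return errors

# A region matching the Step 4 example passes cleanly (returns [])
print(validate_region({"mask": "subject_upper_body",
                       "direction": [0.3, -0.1],
                       "speed": 2.5,
                       "proximity_weight": 0.7}))
```

Running the check on each region before building the `motion_brush` payload catches out-of-range values locally instead of via a failed generation.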
### Camera Motion Presets

- `pan_left`, `pan_right` — Horizontal camera sweep
- `tilt_up`, `tilt_down` — Vertical camera angle shift
- `zoom_in`, `zoom_out` — Focal length simulation
- `orbit_cw`, `orbit_ccw` — Circular movement around subject
- `dolly_in`, `dolly_out` — Physical camera translation forward/backward
- `static` — Locked camera, subject-only motion

## Step 4: Generate Video with Motion Brush via API

Combine your source image, brush regions, and camera motion into a single generation call.

```python
task = client.image_to_video.create(
    model="gen4",
    image_asset_id=image_upload.id,
    duration=5,
    motion_brush={
        "regions": [
            {
                "mask": "subject_upper_body",
                "direction": [0.3, -0.1],
                "speed": 2.5,
                "proximity_weight": 0.7
            },
            {
                "mask": "background_clouds",
                "direction": [-0.5, 0.0],
                "speed": 1.0,
                "proximity_weight": 0.3
            }
        ],
        "ambient_strength": 0.15,
        "camera_motion": "dolly_in"
    },
    prompt="cinematic slow motion, golden hour lighting"
)
print(f"Task ID: {task.id}")
```

## Step 5: Poll for Results and Download
```python
import time

# Poll until the generation succeeds or fails
while True:
    status = client.tasks.retrieve(task.id)
    if status.status == "SUCCEEDED":
        print(f"Video URL: {status.output[0]}")
        break
    elif status.status == "FAILED":
        print(f"Error: {status.failure_reason}")
        break
    time.sleep(10)
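For production jobs it helps to bound the loop with an overall timeout rather than polling forever. Here is a hedged sketch that wraps the same polling logic in a reusable function; `wait_for_task` is not an SDK call, and `fetch_status` is injected (for example `lambda: client.tasks.retrieve(task.id)`) so the helper can be exercised without a live API:

```python
import time


def wait_for_task(fetch_status, timeout=300.0, interval=10.0, sleep=time.sleep):
    """Poll fetch_status() until SUCCEEDED/FAILED or the timeout elapses.

    Illustrative helper, not part of the Runway SDK. fetch_status is any
    zero-argument callable returning an object with .status, .output,
    and .failure_reason, matching the polling loop above.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = fetch_status()
        if status.status == "SUCCEEDED":
            return status.output[0]          # URL of the finished video
        if status.status == "FAILED":
            raise RuntimeError(f"Generation failed: {status.failure_reason}")
        sleep(interval)
    raise TimeoutError("Task did not finish within the timeout")
```

Usage would look like `url = wait_for_task(lambda: client.tasks.retrieve(task.id))`, after which you can download the file from the returned URL.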

## Step 6: Layer Multiple Motion Passes (Advanced)

For complex scenes, generate separate motion passes and composite them. Paint your foreground subject with fast lateral motion while keeping the background on a slow drift, then apply a counter-directional camera pan to create parallax depth.

```python
# Foreground-focused pass
fg_task = client.image_to_video.create(
    model="gen4",
    image_asset_id=image_upload.id,
    duration=5,
    motion_brush={
        "regions": [
            {
                "mask": "character",
                "direction": [0.8, 0.0],
                "speed": 4.0,
                "proximity_weight": 0.9
            }
        ],
        "ambient_strength": 0.05,
        "camera_motion": "pan_right"
    },
    prompt="dynamic action sequence, shallow depth of field"
)
```

## Pro Tips for Power Users

- **Combine opposing vectors**: Paint a subject moving right while setting the camera to `pan_left`. This creates a dramatic speed illusion without cranking the `speed` parameter, which can introduce artifacts.
- **Use low ambient strength**: Keep `ambient_strength` between 0.05 and 0.2 for professional results. Higher values cause jelly-like warping in static areas.
- **Leverage proximity weight**: A value of 0.8–1.0 gives hard-edge motion isolation (ideal for a person walking). Values below 0.4 feather the motion outward, great for flowing fabrics or smoke.
- **Prompt synergy**: Your text prompt should reinforce, not contradict, the brush vectors. If the brush pushes a subject left, avoid prompting "walking to the right."
- **Frame rate control**: Request 24 fps for a cinematic feel or 30 fps for smoother web content. Higher frame rates consume more credits per second of output.
- **Iterate with short clips**: Test with 2-second generations before committing to full 10-second renders. This saves credits and accelerates creative iteration.

## Troubleshooting Common Issues

| Problem | Cause | Solution |
| --- | --- | --- |
| Subject morphs or deforms | `speed` value too high for the region size | Reduce `speed` to below 3.0 and increase `proximity_weight` |
| Background warps unnaturally | High `ambient_strength` with conflicting brush regions | Lower `ambient_strength` to 0.1 and ensure no overlapping mask regions |
| Camera motion overrides brush | Strong camera preset competing with subtle brush vectors | Use the `static` camera when brush precision matters most |
| Generation fails with timeout | Source image too large or complex scene | Resize the source to 1280×768 and reduce the region count to 3 or fewer |
| API returns 429 rate limit | Too many concurrent generation requests | Implement exponential backoff: `time.sleep(2 ** retry_count)` |
| Motion looks jittery | Conflicting direction vectors in adjacent regions | Ensure neighboring regions share similar directional tendencies at their borders |
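The 429 remedy can be packaged as a small retry wrapper. This is a sketch, not SDK functionality: it assumes rate-limit failures raise an exception whose text mentions 429 (adjust the detection to whatever your client actually raises), and it takes a `sleep` parameter so the backoff schedule can be tested without waiting:

```python
import time


def with_backoff(request, max_retries=5, sleep=time.sleep):
    """Retry request() on rate limiting with exponential backoff.

    Illustrative helper: assumes rate-limit failures raise an exception
    whose text contains "429"; adapt the check to your SDK's error type.
    """
    for retry_count in range(max_retries):
        try:
            return request()
        except Exception as exc:
            # Re-raise non-rate-limit errors, and give up on the last attempt
            if "429" not in str(exc) or retry_count == max_retries - 1:
                raise
            sleep(2 ** retry_count)  # waits 1s, 2s, 4s, ...
```

A call site would look like `with_backoff(lambda: client.image_to_video.create(...))`.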
## FAQ

### Can I use Motion Brush with text-to-video or only image-to-video?

Motion Brush in Gen-4 is designed for image-to-video workflows. You need a source image to paint brush regions onto. For text-to-video, you can first generate a still frame from a text prompt, then use that frame as your Motion Brush source image for a two-step workflow.
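That two-step workflow might look like the following sketch. Note that the `text_to_image` endpoint name, the `gen4_image` model string, and the response field names are assumptions for illustration; check your SDK version for the actual calls:

```python
def text_to_motion_video(client, still_prompt, motion_brush, video_prompt):
    """Two-step workflow: generate a still frame, then animate it.

    Hypothetical sketch -- client.text_to_image.create, the model name,
    and the returned field names are assumed, not confirmed SDK API.
    """
    # Step 1: generate a still frame from a text prompt (assumed endpoint)
    still = client.text_to_image.create(model="gen4_image", prompt=still_prompt)
    # Step 2: use that frame as the Motion Brush source image
    return client.image_to_video.create(
        model="gen4",
        image_asset_id=still.asset_id,
        duration=5,
        motion_brush=motion_brush,
        prompt=video_prompt,
    )
```

Because the client is passed in, the function also works with a stub client, which makes the workflow easy to dry-run before spending credits.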

### How many motion brush regions can I define in a single generation?

Gen-4 supports up to 5 independent motion brush regions per generation. Each region can have its own direction, speed, and proximity weight. For scenes requiring more complexity, use the layered pass approach—generate separate clips with different region configurations and composite them in post-production.
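In practice, the layered-pass approach amounts to chunking a long region list into groups that fit the per-generation limit. A small illustrative helper (not SDK functionality):

```python
MAX_REGIONS = 5  # Gen-4 limit per generation, per the answer above


def split_into_passes(regions, max_regions=MAX_REGIONS):
    """Split a region list into per-generation chunks of <= max_regions.

    Illustrative helper: each chunk would be submitted as its own
    motion_brush generation and the clips composited in post.
    """
    return [regions[i:i + max_regions]
            for i in range(0, len(regions), max_regions)]
```

For example, 12 regions become three passes of 5, 5, and 2 regions, each submitted as a separate generation call.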

### Does the Motion Brush work with Gen-4 Turbo or only the standard model?

Motion Brush is available on both the standard Gen-4 model and Gen-4 Turbo. The Turbo variant processes faster but may produce slightly less nuanced motion at extreme speed values. For maximum fidelity on critical shots, use the standard Gen-4 model with the `model="gen4"` parameter.
