# How to Use Runway Gen-4 Motion Brush for Precise Camera and Subject Movement Control
Runway Gen-4 introduces an advanced Motion Brush tool that gives creators granular control over how subjects move and how the camera behaves in AI-generated videos. Unlike simple text-to-video prompts, the Motion Brush lets you paint movement directly onto specific regions of your frame, unlocking cinematic precision that was previously impossible with generative AI. This guide walks you through the complete workflow—from setup to export—so you can produce professional-quality AI videos with intentional, controlled motion.
## Step 1: Set Up Your Runway Environment
Before using the Motion Brush, ensure you have the right account tier and API access configured.
- Create or upgrade your account at app.runwayml.com. Motion Brush is available on the Standard plan and above.
- Install the Runway Python SDK for programmatic workflows:

```shell
pip install runwayml
```

- Authenticate your environment, then verify your credits and quota:

```python
import runwayml

client = runwayml.RunwayML(api_key="YOUR_API_KEY")

account = client.accounts.retrieve()
print(f"Credits remaining: {account.credits}")
```
## Step 2: Upload Your Source Image or Frame
The Motion Brush works on a reference image that serves as your first frame. For best results, use a high-resolution image (at least 1280×768) with clearly defined subjects.
```python
# Upload a source image for video generation
image_upload = client.assets.create(
    file="./scene_reference.png",
    name="motion-brush-source"
)
print(f"Asset ID: {image_upload.id}")
```
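Before uploading, you can enforce the resolution floor mentioned above with a quick pre-flight check. This helper is purely illustrative (it is not part of the Runway SDK); it just encodes the 1280×768 recommendation from this guide.

```python
# Recommended minimum source resolution from this guide (not an SDK constant)
MIN_WIDTH, MIN_HEIGHT = 1280, 768

def meets_resolution_floor(width: int, height: int) -> bool:
    """True if the source image meets the recommended minimum size."""
    return width >= MIN_WIDTH and height >= MIN_HEIGHT

print(meets_resolution_floor(1920, 1080))  # True
print(meets_resolution_floor(1024, 576))   # False
```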
## Step 3: Define Motion Brush Regions
The core of the Motion Brush is region-based motion assignment. You paint masks over parts of your image, then assign directional vectors and intensity values to each region independently.
### Key Motion Brush Parameters
| Parameter | Type | Range | Description |
|---|---|---|---|
| `direction` | Vector (x, y) | -1.0 to 1.0 | Movement direction for the painted region |
| `speed` | Float | 0.0 to 10.0 | Velocity of the motion within the region |
| `ambient_strength` | Float | 0.0 to 1.0 | Organic micro-motion in unpainted areas |
| `proximity_weight` | Float | 0.0 to 1.0 | How sharply motion falls off at mask edges |
| `camera_motion` | Preset string | See list below | Global camera behavior for the entire clip |
Camera motion presets:

- `pan_left`, `pan_right` — Horizontal camera sweep
- `tilt_up`, `tilt_down` — Vertical camera angle shift
- `zoom_in`, `zoom_out` — Focal length simulation
- `orbit_cw`, `orbit_ccw` — Circular movement around the subject
- `dolly_in`, `dolly_out` — Physical camera translation forward/backward
- `static` — Locked camera, subject-only motion
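The parameter ranges and presets above can be checked client-side before spending credits on a failed generation. The following validator is a minimal sketch: the dict shape mirrors this guide's earlier examples and is not an official Runway schema.

```python
# Camera presets and parameter ranges copied from the tables in this guide
CAMERA_PRESETS = {
    "pan_left", "pan_right", "tilt_up", "tilt_down",
    "zoom_in", "zoom_out", "orbit_cw", "orbit_ccw",
    "dolly_in", "dolly_out", "static",
}

def validate_region(region: dict) -> list[str]:
    """Collect human-readable range errors for one brush region."""
    errors = []
    dx, dy = region.get("direction", (0.0, 0.0))
    if not (-1.0 <= dx <= 1.0 and -1.0 <= dy <= 1.0):
        errors.append("direction components must lie in [-1.0, 1.0]")
    if not (0.0 <= region.get("speed", 0.0) <= 10.0):
        errors.append("speed must lie in [0.0, 10.0]")
    if not (0.0 <= region.get("proximity_weight", 0.0) <= 1.0):
        errors.append("proximity_weight must lie in [0.0, 1.0]")
    return errors

print(validate_region({"direction": [0.3, -0.1], "speed": 2.5,
                       "proximity_weight": 0.7}))  # []
```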
## Step 4: Generate Video with Motion Brush via API
Combine your source image, brush regions, and camera motion into a single generation call.
```python
task = client.image_to_video.create(
    model="gen4",
    image_asset_id=image_upload.id,
    duration=5,
    motion_brush={
        "regions": [
            {
                "mask": "subject_upper_body",
                "direction": [0.3, -0.1],
                "speed": 2.5,
                "proximity_weight": 0.7
            },
            {
                "mask": "background_clouds",
                "direction": [-0.5, 0.0],
                "speed": 1.0,
                "proximity_weight": 0.3
            }
        ],
        "ambient_strength": 0.15,
        "camera_motion": "dolly_in"
    },
    prompt="cinematic slow motion, golden hour lighting"
)
print(f"Task ID: {task.id}")
```
## Step 5: Poll for Results and Download
```python
import time

while True:
    status = client.tasks.retrieve(task.id)
    if status.status == "SUCCEEDED":
        print(f"Video URL: {status.output[0]}")
        break
    elif status.status == "FAILED":
        print(f"Error: {status.failure_reason}")
        break
    time.sleep(10)
```
## Step 6: Layer Multiple Motion Passes (Advanced)
For complex scenes, generate separate motion passes and composite them. Paint your foreground subject with fast lateral motion while keeping the background on a slow drift, then apply a counter-directional camera pan to create parallax depth.
```python
# Foreground-focused pass
fg_task = client.image_to_video.create(
    model="gen4",
    image_asset_id=image_upload.id,
    duration=5,
    motion_brush={
        "regions": [
            {"mask": "character", "direction": [0.8, 0.0], "speed": 4.0, "proximity_weight": 0.9}
        ],
        "ambient_strength": 0.05,
        "camera_motion": "pan_right"
    },
    prompt="dynamic action sequence, shallow depth of field"
)
```
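The parallax effect described above can be reasoned about with a toy model: a layer's apparent on-screen velocity is roughly its painted velocity minus the camera's pan velocity, so a counter-directional pan exaggerates foreground motion relative to a slow background drift. The numbers below are illustrative planning values, not SDK parameters.

```python
def apparent_velocity(layer_vx: float, camera_vx: float) -> float:
    """Screen-space horizontal velocity of a layer under a panning camera."""
    return layer_vx - camera_vx

fg = apparent_velocity(4.0, -1.0)  # fast subject moving right, camera panning left
bg = apparent_velocity(0.5, -1.0)  # slow background drift, same camera pan
print(fg, bg)                      # 5.0 1.5
print(fg / bg)                     # parallax ratio between the two layers
```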
## Pro Tips for Power Users
- **Combine opposing vectors**: Paint a subject moving right while setting the camera to `pan_left`. This creates a dramatic speed illusion without cranking the `speed` parameter, which can introduce artifacts.
- **Use low ambient strength**: Keep `ambient_strength` between 0.05 and 0.2 for professional results. Higher values cause jelly-like warping in static areas.
- **Leverage proximity weight**: A value of 0.8–1.0 gives hard-edged motion isolation (ideal for a person walking). Values below 0.4 feather the motion outward, great for flowing fabrics or smoke.
- **Prompt synergy**: Your text prompt should reinforce, not contradict, the brush vectors. If the brush pushes a subject left, avoid prompting "walking to the right."
- **Frame rate control**: Request 24 fps for a cinematic feel or 30 fps for smoother web content. Higher frame rates consume more credits per second of output.
- **Iterate with short clips**: Test with 2-second generations before committing to full 10-second renders. This saves credits and accelerates creative iteration.
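The opposing-vectors tip can be captured in a small config builder: given a subject's brush direction, pick the counter-directional camera pan and fold in the low-ambient recommendation. The `motion_brush` dict shape follows this guide's earlier examples and is an assumption, not an official schema.

```python
def counter_pan(direction: list[float]) -> str:
    """Pan opposite to the dominant horizontal component of a brush vector."""
    return "pan_left" if direction[0] > 0 else "pan_right"

def opposing_vector_config(mask: str, direction: list[float], speed: float) -> dict:
    """Build a motion_brush config implementing the opposing-vectors tip."""
    return {
        "regions": [
            {"mask": mask, "direction": direction, "speed": speed,
             "proximity_weight": 0.9}
        ],
        "ambient_strength": 0.1,  # low, per the pro tips above
        "camera_motion": counter_pan(direction),
    }

cfg = opposing_vector_config("character", [0.8, 0.0], 3.0)
print(cfg["camera_motion"])  # pan_left
```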
## Troubleshooting Common Issues
| Problem | Cause | Solution |
|---|---|---|
| Subject morphs or deforms | `speed` value too high for the region size | Reduce `speed` to below 3.0 and increase `proximity_weight` |
| Background warps unnaturally | High `ambient_strength` with conflicting brush regions | Lower `ambient_strength` to 0.1 and ensure no overlapping mask regions |
| Camera motion overrides brush | Strong camera preset competing with subtle brush vectors | Use the `static` camera when brush precision matters most |
| Generation fails with timeout | Source image too large or complex scene | Resize the source to 1280×768 and reduce the region count to 3 or fewer |
| API returns 429 rate limit | Too many concurrent generation requests | Implement exponential backoff: `time.sleep(2 ** retry_count)` |
| Motion looks jittery | Conflicting direction vectors in adjacent regions | Ensure neighboring regions share similar directional tendencies at their borders |
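The exponential-backoff fix from the table can be wrapped around any generation call. A minimal sketch, assuming the SDK raises an exception on a 429 (the exception class here is a stand-in; substitute whatever your installed SDK version actually raises):

```python
import time

def with_backoff(call, max_retries: int = 5, base: float = 1.0):
    """Retry `call` on rate-limit errors, sleeping base * 2**retry seconds."""
    for retry in range(max_retries):
        try:
            return call()
        except RuntimeError:  # stand-in for the SDK's 429 rate-limit error
            if retry == max_retries - 1:
                raise
            time.sleep(base * (2 ** retry))

# Usage sketch:
# task = with_backoff(lambda: client.image_to_video.create(model="gen4", ...))
```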
## Frequently Asked Questions

### Can I use Motion Brush with text-to-video or only image-to-video?
Motion Brush in Gen-4 is designed for image-to-video workflows. You need a source image to paint brush regions onto. For text-to-video, you can first generate a still frame from a text prompt, then use that frame as your Motion Brush source image for a two-step workflow.
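The two-step workflow can be sketched as a single function over any client exposing the calls used in this guide. The method names and the `gen4_image` model string are assumptions based on this guide's API sketch; check them against the SDK version you have installed.

```python
def text_to_motion_brush_video(client, still_prompt: str, motion_brush: dict,
                               duration: int = 5):
    """Render a still from text, then animate it with Motion Brush regions."""
    # Step 1: generate the reference frame from a text prompt
    still = client.text_to_image.create(model="gen4_image", prompt=still_prompt)
    # Step 2: drive that frame with the painted motion regions
    return client.image_to_video.create(
        model="gen4",
        image_asset_id=still.id,
        duration=duration,
        motion_brush=motion_brush,
    )
```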
### How many Motion Brush regions can I define in a single generation?
Gen-4 supports up to 5 independent motion brush regions per generation. Each region can have its own direction, speed, and proximity weight. For scenes requiring more complexity, use the layered pass approach—generate separate clips with different region configurations and composite them in post-production.
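The layered-pass workaround for the 5-region cap can be automated by chunking a larger region list into per-pass groups. An illustrative helper (the cap value comes from this guide, not an SDK constant):

```python
MAX_REGIONS = 5  # per-generation cap stated in this guide

def split_into_passes(regions: list[dict]) -> list[list[dict]]:
    """Chunk a region list into groups of at most MAX_REGIONS per pass."""
    return [regions[i:i + MAX_REGIONS]
            for i in range(0, len(regions), MAX_REGIONS)]

passes = split_into_passes([{"mask": f"region_{i}"} for i in range(7)])
print(len(passes))     # 2
print(len(passes[0]))  # 5
```

Each chunk then becomes the `regions` list of its own generation call, and the resulting clips are composited in post-production as described above.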
### Does the Motion Brush work with Gen-4 Turbo or only the standard model?
Motion Brush is available on both the standard Gen-4 model and Gen-4 Turbo. The Turbo variant processes faster but may produce slightly less nuanced motion at extreme speed values. For maximum fidelity on critical shots, use the standard Gen-4 model with the `model="gen4"` parameter.