Midjourney Style Reference (--sref) Complete Guide: Create Consistent Brand Visuals

What Is Style Reference and Why It Changes Everything

Before --sref, getting consistent visuals from Midjourney was an exercise in prompt archaeology. You would spend hours crafting the perfect prompt, specifying exact color palettes, lighting conditions, texture descriptions, and artistic influences, only to get inconsistent results across different subjects. Generating a series of illustrations for a brand meant fighting the model on every single generation.

The --sref (style reference) parameter solves this by letting you point Midjourney at an image and say “make everything look like this.” Instead of describing a style in words, you show it visually. The model extracts the visual DNA (color palette, lighting direction, texture quality, composition tendencies, level of detail) and applies it to whatever subject you describe.

This is transformative for three use cases: brand asset creation (every marketing image shares the same visual language), editorial illustration (a series of article illustrations that feel cohesive), and product visualization (consistent product shots across different angles and contexts).

The parameter works with Midjourney v5.2 and later, with the most refined behavior in v6 and v6.1. It accepts image URLs, uploaded Discord images, and even randomly generated style codes.

How --sref Works: The Mechanics

Basic Syntax

The --sref parameter goes at the end of your prompt, after the subject description:

/imagine a minimalist coffee shop interior with morning light --sref https://example.com/style-image.jpg

Midjourney analyzes the reference image for:

  • Color palette: dominant and accent colors, saturation levels, color temperature
  • Lighting: direction, quality (soft vs. hard), contrast ratio
  • Texture: surface quality, grain, smoothness
  • Composition tendencies: depth of field, perspective, framing style
  • Rendering style: photorealistic, illustrated, painterly, 3D, flat

It then applies these characteristics to your subject description while maintaining the subject’s core identity.

Using Uploaded Images vs. URLs

Discord upload method:

  1. Upload your style reference image to any Discord channel
  2. Right-click the image and select “Copy Link”
  3. Paste the link after --sref in your prompt

Direct URL method: Use any publicly accessible image URL. The image must be directly accessible (not behind authentication or inside a gallery page).

/imagine a golden retriever sitting in a garden --sref https://cdn.example.com/monet-style.jpg
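
Since a --sref URL must resolve directly to an image file, a quick pre-check can save wasted generations. The sketch below is a simple heuristic of my own, not Midjourney's actual validation logic (which is not public): it accepts http(s) URLs whose path ends in a common image extension.

```python
from urllib.parse import urlparse

# Extensions Midjourney commonly accepts; extend as needed.
IMAGE_EXTENSIONS = {".jpg", ".jpeg", ".png", ".webp", ".gif"}

def looks_like_direct_image_url(url: str) -> bool:
    """Heuristic pre-check: public http(s) URL whose path ends in an image extension."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https") or not parsed.netloc:
        return False
    return any(parsed.path.lower().endswith(ext) for ext in IMAGE_EXTENSIONS)

print(looks_like_direct_image_url("https://cdn.example.com/monet-style.jpg"))
print(looks_like_direct_image_url("https://example.com/gallery?id=42"))
```

Because `urlparse` separates the path from the query string, Discord CDN links (which append `?ex=...` tokens) still pass the check. A gallery or login page URL fails it, flagging the reference for re-hosting.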

Random Style Codes

Midjourney also supports random style codes — numeric identifiers for predefined aesthetic directions:

/imagine a mountain landscape --sref random

This generates an image with a randomly assigned style code. The code appears in the generation metadata. You can then reuse it:

/imagine a coastal village --sref 12345678

Random codes are useful for discovering unexpected aesthetics, but they are less controllable than providing your own reference images.

Controlling Style Intensity with --sw

The --sw (style weight) parameter controls how strongly the style reference influences the output. It ranges from 0 to 1000, with 100 as the default.

Style Weight Scale

  • --sw 0-25: subtle hint of the style; the subject takes priority. Best when you want a light touch, such as borrowing only the color palette.
  • --sw 25-100: balanced blend of style and subject accuracy. Best for most production work; this is the default range.
  • --sw 100-300: strong style influence; the subject may shift slightly. Best when visual consistency matters more than subject precision.
  • --sw 300-1000: style dominates; the subject becomes secondary. Best for abstract work, texture studies, and extreme stylization.

Practical Examples

Minimal style influence (color palette only):

/imagine a professional headshot of a business executive --sref https://example.com/warm-tones.jpg --sw 20

Balanced (default):

/imagine a professional headshot of a business executive --sref https://example.com/warm-tones.jpg --sw 100

Heavy style transfer:

/imagine a professional headshot of a business executive --sref https://example.com/oil-painting.jpg --sw 500

Start at --sw 100 and adjust based on results. Increase if the output does not capture enough of the reference style. Decrease if the subject is distorted.

Combining Multiple Style References

You can use multiple --sref images to blend styles. This is powerful for creating unique aesthetics that do not exist in any single reference.

Equal Weight Blending

/imagine a futuristic city skyline at dusk --sref https://example.com/cyberpunk.jpg https://example.com/watercolor.jpg

Both references contribute equally. The result blends cyberpunk elements with watercolor texture.

Weighted Blending

Use the :: syntax to assign weights to each reference:

/imagine a futuristic city skyline at dusk --sref https://example.com/cyberpunk.jpg::3 https://example.com/watercolor.jpg::1

This gives the cyberpunk reference three times the influence of the watercolor reference. The numbers are relative — 3:1 is the same as 75:25.
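
The ratio arithmetic is easy to sanity-check in a few lines. This is just the normalization the `::` syntax implies, not Midjourney's internal code:

```python
def normalized_weights(weights: list[float]) -> list[float]:
    """Convert relative ::N weights into percentage shares; only the ratios matter."""
    total = sum(weights)
    return [round(100 * w / total, 1) for w in weights]

# A cyberpunk::3 + watercolor::1 blend splits influence 75/25:
print(normalized_weights([3, 1]))  # [75.0, 25.0]
```

Scaling every weight by the same factor (e.g. `::6` and `::2`) produces the same shares, which is why 3:1 and 75:25 behave identically.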

Practical Multi-Reference Combinations

Brand color palette + artistic style:

/imagine product packaging mockup --sref https://example.com/brand-colors.jpg::2 https://example.com/vintage-print.jpg::1

Lighting reference + texture reference:

/imagine interior design living room --sref https://example.com/golden-hour-light.jpg::2 https://example.com/concrete-minimalism.jpg::1

Photography style + color grading:

/imagine street photography tokyo night --sref https://example.com/film-noir.jpg::1 https://example.com/neon-color-grade.jpg::2

Building a Brand Style Library

For professional use, build a systematic library of style references that you can reuse across projects.

Step 1: Define Your Brand Visual DNA

Before collecting references, identify the components:

  • Primary color palette: 3-5 colors that define the brand
  • Lighting style: natural, studio, dramatic, flat
  • Texture quality: clean, grainy, textured, glossy
  • Rendering approach: photorealistic, illustrated, 3D, mixed media
  • Mood: corporate, playful, luxurious, technical

Step 2: Create Reference Images

The best style references are images that strongly exhibit the desired characteristics without competing subject matter. Create or curate:

  • Color swatch images: solid color compositions or gradient fields
  • Texture samples: close-up surfaces that represent the desired quality
  • Lighting examples: images where the lighting is the dominant feature
  • Style benchmarks: finished pieces that represent the target aesthetic

Step 3: Test and Validate

Run each reference through a standardized test prompt set:

/imagine a coffee cup on a wooden table --sref [your reference] --sw 100
/imagine a person walking down a city street --sref [your reference] --sw 100
/imagine an abstract geometric pattern --sref [your reference] --sw 100
/imagine a landscape with mountains and a lake --sref [your reference] --sw 100

If the style holds consistently across all four subjects, the reference is robust enough for production use.
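
If you validate references often, generating the test set programmatically keeps it standardized. A small sketch (the function name is my own; the four subjects come straight from the list above):

```python
# The standardized test subjects from the validation set above.
TEST_SUBJECTS = [
    "a coffee cup on a wooden table",
    "a person walking down a city street",
    "an abstract geometric pattern",
    "a landscape with mountains and a lake",
]

def validation_prompts(sref_url: str, sw: int = 100) -> list[str]:
    """Build the four standardized /imagine test prompts for one reference."""
    return [f"/imagine {subject} --sref {sref_url} --sw {sw}" for subject in TEST_SUBJECTS]

for prompt in validation_prompts("https://example.com/style.jpg"):
    print(prompt)
```

Paste the printed lines into Discord one at a time; any new candidate reference gets exactly the same gauntlet.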

Step 4: Document and Organize

Create a reference document (spreadsheet or Notion database) with:

  • Reference image thumbnail
  • URL or Discord link
  • Recommended --sw range
  • Best use cases
  • Known limitations
  • Sample outputs
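
If a spreadsheet feels too loose, the same fields map cleanly onto a small record type. A hypothetical sketch, not an official schema; field names are my own:

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class StyleReference:
    """One row of the style library: the fields listed above, as structured data."""
    name: str
    url: str
    sw_range: tuple[int, int] = (50, 150)   # recommended --sw range
    use_cases: list[str] = field(default_factory=list)
    limitations: str = ""
    tested_version: str = "v6.1"            # Midjourney version it was validated on

ref = StyleReference(
    name="warm-editorial",
    url="https://example.com/warm-tones.jpg",
    use_cases=["blog headers", "team photos"],
    limitations="washes out dark scenes",
)
print(json.dumps(asdict(ref), indent=2))
```

Serializing each record to JSON makes the library diffable and easy to sync alongside the reference images themselves.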

Production Workflow: Editorial Illustration Series

Here is a real workflow for creating a cohesive set of blog illustrations:

1. Establish the Master Style

Generate one hero image that perfectly represents the desired aesthetic:

/imagine a wide establishing shot of a modern coworking space, warm natural light, people working at desks, architectural photography style --ar 16:9 --v 6.1

Upscale and refine until you have the perfect reference. Upload this as your master style reference.

2. Generate the Series

Use the master image as --sref for all subsequent illustrations:

/imagine a close-up of hands typing on a laptop keyboard --sref [master URL] --sw 150 --ar 16:9
/imagine two people having a meeting at a whiteboard --sref [master URL] --sw 150 --ar 16:9
/imagine a person standing at a window looking at the city skyline --sref [master URL] --sw 150 --ar 16:9
/imagine an overhead shot of a desk with coffee, notebook, and phone --sref [master URL] --sw 150 --ar 16:9
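
Scripting the series keeps --sw and --ar identical across every prompt and gives each output a predictable filing slug. A sketch under the same assumptions as the prompts above; the helper name is mine:

```python
import re

def series_prompts(master_url: str, subjects: list[str],
                   sw: int = 150, ar: str = "16:9") -> dict[str, str]:
    """Map a filing slug to a full /imagine prompt for each subject in the series."""
    series = {}
    for subject in subjects:
        # Slugify the subject so generated files can be named consistently.
        slug = re.sub(r"[^a-z0-9]+", "-", subject.lower()).strip("-")
        series[slug] = f"/imagine {subject} --sref {master_url} --sw {sw} --ar {ar}"
    return series

prompts = series_prompts(
    "https://example.com/master.jpg",
    ["a close-up of hands typing on a laptop keyboard",
     "two people having a meeting at a whiteboard"],
)
for slug, prompt in prompts.items():
    print(slug, "->", prompt)
```

Adding a subject later means one new list entry, and it inherits the series parameters automatically.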

3. Fine-Tune Individual Images

If specific outputs drift from the style:

  • Increase --sw to 200-300 for stubborn subjects
  • Add brief style reinforcement in the prompt: “same warm natural light, architectural photography feel”
  • Use --no to exclude unwanted elements: --no illustration, cartoon, painting

4. Batch Processing Tips

  • Run all images in the same Discord channel thread for easy comparison
  • Generate 4 variations of each (default behavior) and pick the most consistent
  • Keep a “reject pile” — images that look good individually but break the series cohesion

Common Mistakes and How to Fix Them

Mistake 1: Using a Busy Reference Image

If your style reference has a strong subject (like a portrait of a person), Midjourney may try to incorporate that subject into the output. Fix: crop the reference to show only texture, color, or background elements.

Mistake 2: Setting —sw Too High

Values above 300 can distort the subject beyond recognition. Fix: start at 100 and increase in increments of 50 until you find the sweet spot.

Mistake 3: Conflicting Prompt and Style Reference

If your prompt says “dark moody nighttime” but your style reference is a bright sunny image, the results will be confused. Fix: align your prompt description with the general mood of the reference.

Mistake 4: Ignoring Aspect Ratio Interaction

Style references interact with aspect ratio. A reference shot in portrait orientation may produce different style transfer results when used with --ar 16:9. Fix: create separate references for different aspect ratios if consistency is critical.

Mistake 5: Not Versioning Style References

Midjourney versions (v5, v6, v6.1) interpret --sref differently. A reference that works perfectly in v6 may produce different results in v6.1. Fix: document which version each reference was tested with and re-validate after version upgrades.

--sref vs. --cref: When to Use Which

Midjourney has two reference parameters that serve different purposes:

  • --sref (style reference) transfers color palette, lighting, texture, rendering style, and mood; it does not transfer subject identity, specific objects, or faces. Best for brand consistency, editorial series, and product shots.
  • --cref (character reference) transfers character appearance, face, clothing, and body type; it does not transfer art style, lighting, or color palette. Best for consistent characters across scenes.

You can combine both:

/imagine a warrior standing on a cliff at sunset --sref [style URL] --cref [character URL]

This gives you a specific character rendered in a specific visual style — the holy grail for narrative illustration and game concept art.

Advanced Techniques

Style Interpolation

Generate a series with gradually changing --sw to create a style transition:

/imagine mountain landscape --sref [reference] --sw 50
/imagine mountain landscape --sref [reference] --sw 100
/imagine mountain landscape --sref [reference] --sw 200
/imagine mountain landscape --sref [reference] --sw 400

This shows how the style reference progressively overtakes the default rendering.
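
The doubling sequence above (50, 100, 200, 400) is a geometric sweep, and a few lines compute one for any endpoints. A sketch of my own, not an official tool:

```python
def sw_sweep(start: int = 50, stop: int = 400, steps: int = 4) -> list[int]:
    """Geometrically spaced --sw values from start to stop, inclusive of both."""
    ratio = (stop / start) ** (1 / (steps - 1))
    return [round(start * ratio ** i) for i in range(steps)]

print(sw_sweep())  # [50, 100, 200, 400]
```

Geometric spacing matches how --sw feels in practice: each doubling is a roughly comparable jump in stylization, whereas linear steps bunch the visible change at the low end.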

Style Code Discovery

Use the /describe command on images you admire to reverse-engineer prompt elements, then combine with --sref for compound styling:

/describe [upload an image you like]

Take the descriptive keywords and combine with a different --sref for hybrid results.

Cross-Model Style Transfer

Generate a base image in another tool (DALL-E, Stable Diffusion) with specific characteristics, then use that output as a --sref in Midjourney. This lets you leverage the strengths of different models while maintaining Midjourney’s rendering quality.

Frequently Asked Questions

Does --sref work with all Midjourney versions?

--sref was introduced in v5.2 and works with all subsequent versions. Behavior is most refined in v6 and v6.1. Earlier versions may interpret style references less consistently.

Can I use AI-generated images as style references?

Yes. Any image accessible via URL can be used as a --sref, regardless of how it was created. AI-generated images often work well because they tend to have strong, clear stylistic characteristics.

How many --sref images can I combine?

There is no hard limit, but practical results degrade beyond 3-4 references. The more references you combine, the more diluted each influence becomes. For most work, 1-2 references produce the best results.

Does --sref affect generation speed?

Slightly. The model needs to analyze the reference image in addition to processing the prompt, which adds a small amount to generation time. The difference is negligible for most workflows.

Can I save --sref codes permanently?

Random style codes (numeric) persist indefinitely and can be reused at any time. Image URLs remain valid as long as the source image is accessible. For long-term projects, host reference images on your own domain or a stable CDN.

Does the reference image quality matter?

Yes. Higher resolution reference images provide more detail for the model to analyze. However, the content of the image matters more than the resolution. A well-composed 1024x1024 reference will outperform a blurry 4K image.

Can --sref replicate a specific artist’s style?

It can capture general characteristics of an artist’s work (color palette, texture, composition patterns), but it is not designed for exact replication. For ethical reasons, avoid using --sref to closely replicate living artists’ distinctive styles without permission.
