All LoRA Guides

LoRA Weight Tuning

The lora_scale parameter controls how strongly a LoRA affects your generation; values typically run from 0.0 to 1.0, though values above 1.0 and below 0 are also possible. Too low and the LoRA has no visible effect; too high and images become oversaturated or distorted. The ideal weight depends on the LoRA type: character LoRAs work best at 0.8–1.0, style LoRAs at 0.3–0.6, and concept LoRAs at 0.6–0.8.

What lora_scale Does

At inference time, the LoRA's weight matrices are multiplied by lora_scale before being added to the base model's weights. This is a simple linear scaling operation: the effective weight update becomes lora_scale × BA, where B and A are the LoRA's low-rank matrices.
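The scaling can be sketched in a few lines of NumPy. The dimensions and random matrices here are arbitrary stand-ins, not real model weights:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in shapes: a 64x64 base weight with a rank-4 LoRA update.
d, rank = 64, 4
W = rng.standard_normal((d, d))      # base model weight matrix
B = rng.standard_normal((d, rank))   # LoRA low-rank matrix B
A = rng.standard_normal((rank, d))   # LoRA low-rank matrix A

def effective_weight(lora_scale: float) -> np.ndarray:
    """Effective weight = W + lora_scale * (B @ A)."""
    return W + lora_scale * (B @ A)

# 0.0 disables the LoRA, 1.0 applies the full trained update,
# and a negative scale subtracts the update instead of adding it.
assert np.allclose(effective_weight(0.0), W)
assert np.allclose(effective_weight(1.0) - W, B @ A)
assert np.allclose(effective_weight(-0.5) - W, -0.5 * (B @ A))
```

Because the operation is linear, intermediate values interpolate smoothly between the base model (0.0) and the fully applied LoRA (1.0).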

At 0.0

The LoRA is completely disabled — you get pure base model output. The LoRA weights are multiplied by zero, so they contribute nothing.

At 1.0

The LoRA is applied at full strength — exactly as it was trained. This is the default on most platforms and the intended operating point for the LoRA.

Above 1.0

Amplifies the LoRA beyond its trained strength. This can work for subtle LoRAs that need a boost, but often causes artifacts — oversaturated colors, distorted shapes, and incoherent compositions.

Below 0 (negative)

Actively suppresses the LoRA's features — the model does the opposite of what the LoRA was trained to produce. A character LoRA at −0.5 would push away from that character's features. Useful in multi-LoRA setups to counteract unwanted influence.

Recommended Weights by LoRA Type

Character LoRAs (0.8–1.0)

Character LoRAs encode specific facial features, body proportions, and identity markers. They need high weight to maintain likeness. Starting at 0.85 is usually safe. If the face looks "almost right but not quite," increase toward 1.0. Only reduce below 0.8 if the character is overpowering the scene composition.

Style LoRAs (0.3–0.6)

Style LoRAs affect the entire image aesthetic — colors, textures, rendering approach. They're powerful and a little goes a long way. Starting at 0.4 gives a noticeable stylistic shift without overwhelming the prompt. Push to 0.6+ for a stronger style lock. Above 0.7, the style may dominate and distort content elements.

Concept LoRAs (0.6–0.8)

Concept LoRAs teach specific objects, environments, or visual ideas. They need moderate weight to be recognizable. Start at 0.7. If the concept is barely visible, increase. If it's distorting unrelated parts of the image, decrease.

Realism LoRAs (0.5–0.8)

Realism enhancement LoRAs like XLabs-AI/flux-RealismLora improve photorealism across the entire image. Starting at 0.6 adds a subtle realism boost. Higher values increase photographic quality but may flatten artistic or stylized prompts.

Control LoRAs (0.8–1.0)

Control LoRAs (Canny, Depth, Pose) provide structural guidance and need high weight to accurately follow the control signal. Reducing weight below 0.7 usually breaks the structural alignment. These LoRAs work differently from content LoRAs — they constrain the spatial layout rather than adding visual features.

Troubleshooting

LoRA has no visible effect

  • Weight too low — increase lora_scale toward 1.0
  • Missing trigger word — check the model page for required trigger words and include them in your prompt
  • Base model mismatch — Flux LoRAs don't work on SDXL models and vice versa; the weight dimensions are incompatible
  • Wrong parameter name — ensure you're using the correct field (lora_scale, not strength)
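As a sanity check for the last two points, the request body should spell the field exactly as this guide names it. A minimal payload sketch — the trigger word `ohwx_person` and the overall payload layout are illustrative assumptions, so check your platform's schema:

```python
# Illustrative payload: field names follow this guide, not any specific API schema.
payload = {
    "prompt": "ohwx_person, portrait photo, soft window light",  # trigger word included
    "lora_scale": 0.85,   # correct field name
    # "strength": 0.85,   # wrong field name -- often silently ignored
}

assert "lora_scale" in payload and "strength" not in payload
```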

Image looks oversaturated or distorted

  • Weight too high — reduce lora_scale by 0.1–0.2
  • Combined LoRA weights exceed 2.0 — reduce individual weights so the sum stays under 2.0
  • Conflicting LoRAs — remove one and test individually to isolate the problem
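One way to apply the 2.0 rule of thumb is to rescale all weights proportionally whenever their sum exceeds the budget. The cap itself is this guide's heuristic, not a hard API limit:

```python
def cap_total_weight(scales, cap=2.0):
    """Proportionally shrink LoRA weights so their combined magnitude fits the cap."""
    total = sum(abs(s) for s in scales)
    if total <= cap:
        return list(scales)          # already within budget, leave untouched
    factor = cap / total
    return [round(s * factor, 3) for s in scales]

# Three LoRAs totalling 2.4 are scaled down to fit the 2.0 budget
# while keeping their relative strengths intact.
print(cap_total_weight([1.0, 0.8, 0.6]))
```

Proportional rescaling preserves the relative balance between LoRAs, which usually matters more than any single absolute weight.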

Character looks "almost right" but slightly off

  • Increase character LoRA weight toward 1.0
  • Reduce style LoRA weight if combined — style may be overriding facial features
  • Add more descriptive terms about the character in the prompt to reinforce the LoRA's training

Style is too subtle

  • Increase style LoRA weight by 0.1 increments
  • Add style-related keywords to the prompt alongside the trigger word
  • Try reducing guidance_scale — high CFG can override LoRA influence by pulling the output closer to the text prompt and away from the LoRA's learned distribution

The Tuning Process

  1. Start with the recommended weight for your LoRA type (see the recommended ranges above). This gives you the highest chance of a good result on the first try.
  2. Generate 2–3 images at that weight to establish a baseline. A single image can be misleading due to normal variation between seeds.
  3. Adjust by 0.1 increments — small changes have visible effects. Moving from 0.7 to 0.8 is often the difference between "barely there" and "clearly present."
  4. If combining LoRAs, tune the primary LoRA first at its ideal weight, then add the secondary LoRA and adjust its weight while keeping the primary fixed.
  5. Use the same seed across weight experiments to see the effect of weight changes in isolation. Changing the seed between comparisons adds noise to your observations.
  6. Save your final settings as a preset in the Integrations modal so you can reproduce the configuration without re-tuning.
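Steps 3 and 5 can be sketched as a simple sweep driver. Here `generate` is a hypothetical callable standing in for whatever client or pipeline call you actually use:

```python
def sweep_lora_scale(generate, start=0.6, stop=1.0, step=0.1, seed=42):
    """Call `generate` at each weight with a fixed seed so only lora_scale varies."""
    results = {}
    scale = start
    while scale <= stop + 1e-9:       # small epsilon guards against float drift
        s = round(scale, 1)
        results[s] = generate(lora_scale=s, seed=seed)
        scale += step
    return results

# Stub in place of a real image call, just to show the sweep shape.
images = sweep_lora_scale(lambda **kw: f"image@{kw['lora_scale']}", seed=42)
print(sorted(images))  # weights 0.6, 0.7, 0.8, 0.9, 1.0
```

Because the seed is held constant across calls, any visual difference between the results is attributable to the weight change alone.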

Tips

  • The "right" weight varies per LoRA — two character LoRAs trained differently may need different scales. Training data quality, number of training steps, and rank all affect the optimal inference weight.
  • Some LoRA creators include recommended weights in their model description — always check before tuning from scratch.
  • When tuning, change only one variable at a time (weight, prompt, or seed) so you can isolate the effect of each change.
  • If you're combining LoRAs and something looks wrong, test each LoRA individually at weight 1.0 first to understand what each contributes on its own.