Easily Generate Consistent AI Characters with WAN 2.2 LoRAs

Jun 2025 · 9 min read · LoRA Training · WAN 2.2 · AI Toolkit · RunPod

Getting a character to look exactly the same across different images — and even across videos — is one of the hardest consistency challenges in AI generation. In this guide we'll use AI Toolkit to train a character LoRA specifically for the WAN 2.2 models, giving you reusable, reliable characters you can drop into any workflow.

What Is AI Toolkit?

AI Toolkit is an open-source LoRA training framework that makes it straightforward to train custom LoRA models that influence generation toward a specific style or character. It has first-class support for the latest WAN 2.2 models, which means a character LoRA you train here will work for both still image generation and video clips.

- 🎭 Character Consistency: same face, hair, and style across unlimited scenes and poses
- 🎬 Images + Video: the WAN 2.2 14B model supports both still images and video generation
- ☁️ Cloud or Local: train on RunPod (24 GB+ VRAM recommended) or on your own hardware
- 🖱️ One-Click Installer: local Windows install available via a one-click BAT file on GitHub

VRAM requirement: Training WAN 2.2 LoRAs requires a minimum of 24 GB of VRAM. If your local GPU doesn't meet that, RunPod is the recommended alternative; the RTX 5090 (32 GB) works great.
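If you're not sure how much VRAM your card has, a quick check helps before committing to a local install. A minimal sketch; the helper function is mine, and the commented-out lines assume PyTorch with CUDA is installed:

```python
def meets_vram_requirement(total_bytes: int, required_gb: float = 24.0) -> bool:
    """Return True if a GPU's total memory covers the 24 GB training minimum."""
    return total_bytes / (1024 ** 3) >= required_gb

# With PyTorch installed, feed in your card's real number, e.g.:
#   import torch
#   total = torch.cuda.get_device_properties(0).total_memory
#   print(meets_vram_requirement(total))

print(meets_vram_requirement(32 * 1024 ** 3))  # RTX 5090 class: True
print(meets_vram_requirement(16 * 1024 ** 3))  # 16 GB card: False
```

If the check comes back False, skip ahead to Option B.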

Option A — Local Install (One-Click)

If you have 24 GB+ of VRAM locally, GitHub user Tavers1 provides a one-click Windows installer for AI Toolkit:

  1. Go to the AI Toolkit Easy Install GitHub repository.
  2. Navigate to the Releases page and download the ZIP file.
  3. Extract the ZIP into its own folder.
  4. Double-click the .bat file — it handles all dependencies automatically.

Option B — RunPod (Recommended for Most)

For everyone else, RunPod provides on-demand GPU access without a large upfront investment.

New user bonus: RunPod is currently offering a credit bonus for new sign-ups: spend $10 via a referral link and receive a random credit of between $5 and $500. Check the video description for the link.

  1. Select a GPU pod. Sign in to RunPod, open the Pods submenu, and select a GPU. The RTX 5090 (32 GB) is ideal for WAN 2.2 LoRA training.
  2. Choose the template. Click Change Template and search for Ostris AI Toolkit Official Community. Select it, then click Deploy.
  3. Open the UI. Once the HTTP service port turns green, click it. If prompted for a password, enter password.

Step 1 — Prepare Your Dataset

A good dataset is the foundation of a good LoRA. For WAN 2.2 character training, use clear, well-lit images of your character shot from a variety of angles, with varied expressions, outfits, and backgrounds, and with the character as the main subject of each frame.

Once your images are ready:

  1. In the AI Toolkit UI, navigate to the Datasets tab.
  2. Click New Dataset and give it a name.
  3. Drag and drop your images into the upload field.

Adding Captions

For character LoRAs, you don't need elaborate per-image captions. A single, unique trigger word applied to all images works perfectly — this is the word you'll use later in prompts to activate the character.

Example trigger word: sarah_taylor — short, unique, and unlikely to conflict with anything in the model's base training data.
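Applying the same caption to every image is easy to script. A minimal sketch, assuming your images sit in a local dataset/ folder before upload; the folder name and helper function are my own, not part of AI Toolkit:

```python
from pathlib import Path

TRIGGER_WORD = "sarah_taylor"   # your unique trigger word
DATASET_DIR = Path("dataset")   # assumed local folder holding the training images
IMAGE_EXTS = {".png", ".jpg", ".jpeg", ".webp"}

def write_captions(dataset_dir: Path, trigger: str) -> int:
    """Write a sidecar .txt caption containing only the trigger word
    next to every image in the folder; returns how many were written."""
    written = 0
    for image in dataset_dir.iterdir():
        if image.suffix.lower() in IMAGE_EXTS:
            image.with_suffix(".txt").write_text(trigger, encoding="utf-8")
            written += 1
    return written

if DATASET_DIR.exists():
    print(f"Wrote {write_captions(DATASET_DIR, TRIGGER_WORD)} caption files")
```

Each image then gets a matching .txt file (photo01.png → photo01.txt) containing just the trigger word, which is the sidecar-caption convention most LoRA trainers read.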

Step 2 — Configure the Training Job

Navigate to the New Job menu in AI Toolkit. Here are the key settings:

- LoRA file name: your character name (this becomes the output filename)
- Trigger word: same as your dataset captions (must match exactly)
- Model architecture: wan2.2_14b (normal), which works for both image and video generation
- Low VRAM: checked ✅ (prevents out-of-memory errors during training)
- Noise model: low-noise only (recommended); training both works but takes longer, and low-noise gives stable results
- Training steps: 2500–2750; the default of 3000 can overfit, and 2500–2750 often gives cleaner results

Leave other settings at defaults unless you have specific experience with LoRA training hyperparameters — the defaults are well-optimized for WAN 2.2.
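If you ever run AI Toolkit headless rather than through the UI, jobs are defined in a YAML config file. The fragment below only illustrates how the settings above might map onto such a file; the key names are assumptions, not AI Toolkit's exact schema, and in normal use the UI writes the real config for you:

```yaml
# Illustrative only -- key names are assumptions, not AI Toolkit's exact schema.
job:
  name: sarah_taylor          # LoRA file name
  trigger_word: sarah_taylor  # must match the dataset captions exactly
  model:
    arch: wan2.2_14b          # "normal" variant; image + video
    low_vram: true            # helps avoid OOM during training
  train:
    noise_model: low          # low-noise only (recommended)
    steps: 2750               # 2500-2750; the 3000 default can overfit
```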

Step 3 — Set Up Sample Image Generation

AI Toolkit can generate test images at different points during training so you can monitor progress visually. Make sure your sample prompts include your trigger word; otherwise the test images won't actually exercise the character you're training.

Step 4 — Start Training

  1. Click Create Job to save your configuration.
  2. Click the Play button in the top menu bar to start training. AI Toolkit will download the required WAN 2.2 models and then begin.
  3. Monitor the Samples tab — test images are generated at regular intervals, giving you a visual sense of how well the LoRA is learning your character.

What to look for: When the person in sample images consistently resembles your character from the dataset, the LoRA is converging well. If it's still inconsistent at step 2000+, consider increasing your dataset size or diversity.

Step 5 — Download and Use Your LoRA

AI Toolkit saves checkpoint files at regular intervals during training. Once you're happy with a checkpoint:

  1. Open the Checkpoints panel and download your preferred checkpoint (the final one or whichever looked best in samples).
  2. Place the .safetensors LoRA file into your ComfyUI models/loras/ folder or your Forge UI models/Lora/ folder.
  3. In your workflow, load the LoRA and use your trigger word in the prompt to activate the character, e.g. sarah_taylor walking through a city street at golden hour.
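Step 2 above is just a file copy; if you end up moving checkpoints around often, a tiny helper saves some clicks. A minimal sketch (the function name and example paths are mine, not part of AI Toolkit or ComfyUI):

```python
import shutil
from pathlib import Path

def install_lora(checkpoint: Path, loras_dir: Path) -> Path:
    """Copy a trained LoRA checkpoint into a UI's LoRA folder,
    creating the folder first if it doesn't exist yet."""
    loras_dir.mkdir(parents=True, exist_ok=True)
    dest = loras_dir / checkpoint.name
    shutil.copy2(checkpoint, dest)
    return dest

# Example usage (paths are assumptions -- adjust to your own installs):
#   install_lora(Path("sarah_taylor.safetensors"), Path("ComfyUI/models/loras"))
```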

The LoRA works for both still images and video clips with WAN 2.2, so once trained you can generate your character in any scene, pose, or motion sequence.

Ready-made LoRAs: If you'd rather skip the training entirely, Patreon Gold tier members receive a free custom WAN or Flux LoRA trained by the channel every single month.

Tips for Better Results

- Compare several checkpoints before settling on one; a mid-training checkpoint sometimes captures the character better than the final save.
- If sample images are still inconsistent late in training, add more varied images to your dataset rather than simply training longer.
- Keep your trigger word short and unique so it doesn't collide with concepts already in the base model's training data.

📦 Want to skip the setup?

The Local Lab offers pre-configured AI installer packages so you can get running in minutes, not hours.

Get the Installer →