# Fix Agent

Automatic compilation error fixing with external LLMs

The Fix Agent uses external LLMs to automatically fix compilation errors in generated code.

## Overview

During RAFT training, many samples fail compilation. Instead of discarding them, the Fix Agent can attempt repairs:

```
Failed Sample ──► Fix Agent ──► Fixed Sample ──► Verify ──► Training Data
     │                              │
     └──► Compiler Error ───────────┘
```

## Why Use a Fix Agent?

The fix agent attempts to repair compilation errors, potentially converting some failures into usable training samples. Results vary depending on the nature of the errors and the external LLM’s capabilities.

## Configuration

```python
import os

from malagent.generators import create_generator, CompileFixAgent
from malagent.verifiers import JointVerifier

# Create external LLM generator
generator = create_generator(
    provider="anthropic",
    model="claude-sonnet-4-20250514",
    api_key=os.environ["ANTHROPIC_API_KEY"]
)

# Create verifier
verifier = JointVerifier(
    win_host="10.0.0.152",
    win_user="keys",
    win_key="~/.ssh/win"
)

# Create fix agent
fix_agent = CompileFixAgent(
    generator=generator,
    verifier=verifier,
    max_attempts=3,        # Max fix iterations
    timeout=60             # Per-attempt timeout (seconds)
)
```

## Usage

### Single Sample Fix

```python
result = await fix_agent.fix(
    original_code=broken_code,
    compiler_error=error_message,
    prompt=original_prompt  # Optional context
)

if result.success:
    print(f"Fixed in {result.fix_iterations} attempts")
    print(f"Errors fixed: {result.compile_errors_fixed}")
    # result.completion contains the fixed code
```

### Batch Processing

```python
failed_samples = [s for s in samples if not s.compiled]

fixed_samples = []
for sample in failed_samples:
    result = await fix_agent.fix(
        original_code=sample.completion,
        compiler_error=sample.error
    )
    if result.success:
        fixed_samples.append(result)

print(f"Fixed {len(fixed_samples)}/{len(failed_samples)} samples")
```

## How It Works

### Fix Prompt Template

The fix agent sends a structured prompt to the external LLM:

````
You are a C++ compilation error fixer.

Original code:
```cpp
{original_code}
```

Compiler error:
```
{compiler_error}
```

Fix the compilation error and return only the corrected code.
Do not explain the fix, just return the working code.
````
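
If you build a similar prompt for a custom agent, rendering it is a plain `str.format` call. A minimal sketch; the `FIX_PROMPT_TEMPLATE` constant below is an illustrative name, not a symbol exported by malagent:

```python
# Hypothetical constant holding the template text shown above.
FIX_PROMPT_TEMPLATE = (
    "You are a C++ compilation error fixer.\n\n"
    "Original code:\n```cpp\n{original_code}\n```\n\n"
    "Compiler error:\n```\n{compiler_error}\n```\n\n"
    "Fix the compilation error and return only the corrected code. "
    "Do not explain the fix, just return the working code."
)

prompt = FIX_PROMPT_TEMPLATE.format(
    original_code=broken_code,
    compiler_error=error_message,
)
```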


### Iterative Fixing

If the first fix attempt fails, the agent tries again with accumulated errors:

```
Attempt 1: Fix original error     → Still fails (new error)
Attempt 2: Fix new error          → Still fails (another error)
Attempt 3: Fix accumulated errors → Success!
```


Each iteration provides the full error history to avoid cycles.
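
A condensed sketch of that loop; `build_fix_prompt` and `compile_check` are hypothetical helpers, and the `generator.generate()` call is an assumption about the generator API (the real loop lives inside `CompileFixAgent`):

```python
async def iterative_fix(generator, code, first_error, max_attempts=3):
    """Sketch of the accumulated-error loop; helper names are illustrative."""
    error_history = [first_error]
    for attempt in range(1, max_attempts + 1):
        # Send every error seen so far, not just the latest, so the
        # model does not reintroduce a previously fixed mistake.
        prompt = build_fix_prompt(code, "\n---\n".join(error_history))
        code = await generator.generate(prompt)
        compiled, new_error = compile_check(code)  # hypothetical compile step
        if compiled:
            return code, attempt
        error_history.append(new_error)
    return None, max_attempts
```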

## Sample Attribution

Fixed samples carry metadata about their repair history:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class GeneratedSample:
    prompt: str
    completion: str           # Fixed code
    generator: str            # "anthropic/claude-sonnet-4-20250514"

    # Fix tracking
    was_fixed: bool           # True if fix agent was used
    fix_iterations: int       # Number of fix attempts (1-3)
    original_completion: str  # Code before fixing
    compile_errors_fixed: List[str]  # ["C2065", "C2143"]
```

This enables:

- Training on both original and fixed samples
- Analyzing which error types are fixable
- Weighting samples by fix complexity (see the sketch below)
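
For example, a minimal down-weighting scheme based on the fix metadata; the weighting itself is illustrative, not part of malagent, and `training_samples` is an assumed list of `GeneratedSample` objects:

```python
def sample_weight(sample: GeneratedSample) -> float:
    """Down-weight samples that needed more repair iterations (illustrative)."""
    if not sample.was_fixed:
        return 1.0
    return 1.0 / (1 + sample.fix_iterations)

weighted = [(s, sample_weight(s)) for s in training_samples]
```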

## Integration with RAFT

### During Training

```python
async def raft_cycle_with_fixing(samples, verifier, fix_agent):
    verified = []
    to_fix = []

    # First pass: verify all samples
    for sample in samples:
        result = verifier.verify(sample.prompt, sample.completion)
        if result.success:
            verified.append(sample)
        elif not result.compiled:
            to_fix.append((sample, result.error))

    # Second pass: fix failed samples
    for sample, error in to_fix:
        fixed = await fix_agent.fix(sample.completion, error)
        if fixed.success:
            verified.append(fixed)

    return verified
```

### CLI Integration

```bash
malagent raft train \
    --mode mvr \
    --fix-agent \
    --fix-provider anthropic \
    --fix-model claude-sonnet-4-20250514 \
    --max-fix-attempts 2
```

## Cost Considerations

Fix agent calls incur API costs. Control costs with:

```python
fix_agent = CompileFixAgent(
    generator=generator,
    verifier=verifier,
    max_attempts=2,           # Limit attempts
    budget_per_sample=0.05    # Set your own budget limit
)
```

Note: Actual costs depend on your provider, model, and token usage. Check your provider’s pricing for current rates.
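
A back-of-the-envelope way to keep spend visible, assuming you collect fix results in a `fix_results` list; the per-attempt cost figure is a placeholder, not a real rate:

```python
# Rough spend estimate from attempt counts. Derive the per-attempt
# figure from your provider's pricing and observed token usage.
EST_COST_PER_ATTEMPT = 0.01  # placeholder average USD per fix attempt

total_attempts = sum(r.fix_iterations for r in fix_results)
print(f"~${total_attempts * EST_COST_PER_ATTEMPT:.2f} "
      f"spent across {len(fix_results)} samples")
```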

## Best Practices

### 1. Filter Before Fixing

Don’t fix obviously hopeless samples:

```python
for sample in failed_samples:
    error = sample.error

    # Skip samples with catastrophic errors
    if "fatal error" in error or len(error) > 2000:
        continue  # Don't waste API calls

    result = await fix_agent.fix(sample.completion, error)
```

### 2. Limit Fix Attempts

More attempts mean more cost with diminishing returns:

```python
fix_agent = CompileFixAgent(
    generator=generator,
    verifier=verifier,
    max_attempts=2  # Usually enough
)
```

### 3. Track Fix Success Rate

Monitor which errors are fixable:

```python
from collections import defaultdict

fix_stats = defaultdict(lambda: {"attempts": 0, "successes": 0})

for sample in failed_samples:
    error_type = extract_error_code(sample.error)  # e.g., "C2065"
    fix_stats[error_type]["attempts"] += 1

    result = await fix_agent.fix(sample.completion, sample.error)
    if result.success:
        fix_stats[error_type]["successes"] += 1

# Report fix rates by error type
for error_type, stats in fix_stats.items():
    rate = stats["successes"] / stats["attempts"] * 100
    print(f"{error_type}: {rate:.1f}% fixable")
```

## Troubleshooting

### Fix Agent Not Improving

1. Check whether the errors are actually fixable (syntax errors usually are; logic errors are not)
2. Increase `max_attempts`
3. Try a more capable model (e.g., Opus instead of Sonnet)

### High Costs

1. Reduce `max_attempts`
2. Filter samples before fixing
3. Use a cheaper model for initial attempts (see the sketch below)
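
A sketch of that tiering, reusing the constructor from Configuration; the cheap model name is a placeholder, substitute your provider's actual budget model:

```python
# Try a cheap model first; escalate to the stronger one only on failure.
cheap_agent = CompileFixAgent(
    generator=create_generator(provider="anthropic",
                               model="your-cheap-model",  # placeholder name
                               api_key=os.environ["ANTHROPIC_API_KEY"]),
    verifier=verifier,
    max_attempts=1,
)
strong_agent = CompileFixAgent(
    generator=create_generator(provider="anthropic",
                               model="claude-sonnet-4-20250514",
                               api_key=os.environ["ANTHROPIC_API_KEY"]),
    verifier=verifier,
    max_attempts=2,
)

result = await cheap_agent.fix(original_code=code, compiler_error=error)
if not result.success:
    result = await strong_agent.fix(original_code=code, compiler_error=error)
```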

### Fix Cycles

If the same errors keep appearing:

1. The fix prompt may need improvement
2. The error may be unfixable with the available context
3. Consider skipping the sample after two failed attempts (see the cycle check below)
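
A cheap cycle check, assuming you keep the per-attempt error list (`error_history`) and reuse the `extract_error_code` helper from Best Practices:

```python
# Stop spending on a sample once the same error code repeats: the model
# is likely looping rather than converging on a fix.
seen = set()
for attempt_error in error_history:  # errors collected across attempts
    sig = extract_error_code(attempt_error)  # e.g., "C2065"
    if sig in seen:
        break  # cycle detected; skip this sample
    seen.add(sig)
```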