Anthropic Launches AI Agent Teams to Review Your Code
Claude Code gets a new Code Review feature where AI agents collaborate to catch bugs in pull requests.
Anthropic just dropped a Code Review feature for Claude Code, and it's not your typical linting tool. This one deploys AI agents that work together in teams to scrutinize pull requests for bugs and other issues.
The feature is currently available as a research preview, so expect rough edges. But the early numbers are compelling — Anthropic's internal testing showed the system tripled the amount of meaningful code review feedback compared to standard reviews.
That's a significant jump. Code review is one of the biggest bottlenecks in software development, and most developers treat it as a chore. Having AI agents tag-team pull requests could free up serious engineering hours.
The multi-agent approach is the notable part. Rather than a single model scanning code, multiple agents collaborate to catch what any one of them alone might miss. Anthropic is betting that teamwork pays off for AI, too.
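To make the idea concrete, here is a minimal sketch of the general fan-out-and-merge pattern behind multi-agent review. Everything in it is hypothetical: Anthropic has not published how its agents are specialized or coordinated, so the reviewer roles, function names, and merge logic below are illustrative assumptions, with the model calls replaced by simple string checks.

```python
# Hypothetical sketch of a multi-agent review pattern: several specialized
# "reviewer" agents each scan the same diff, and their findings are merged
# and deduplicated. This illustrates the fan-out/merge idea only; it is
# NOT Anthropic's implementation, whose internals are not public. In a
# real system each function would be an LLM call with its own prompt.

def security_reviewer(diff: str) -> list[str]:
    # Stand-in for an agent focused on security concerns.
    findings = []
    if "eval(" in diff:
        findings.append("security: avoid eval() on untrusted input")
    return findings

def style_reviewer(diff: str) -> list[str]:
    # Stand-in for an agent focused on style conventions.
    findings = []
    for line in diff.splitlines():
        if line.startswith("+") and len(line) > 100:
            findings.append("style: added line exceeds 100 characters")
    return findings

def logic_reviewer(diff: str) -> list[str]:
    # Stand-in for an agent focused on correctness pitfalls.
    findings = []
    if "except:" in diff:
        findings.append("logic: bare except swallows all errors")
    return findings

def review_pull_request(diff: str) -> list[str]:
    """Fan the diff out to every agent, then merge unique findings."""
    agents = [security_reviewer, style_reviewer, logic_reviewer]
    merged: list[str] = []
    for agent in agents:
        for finding in agent(diff):
            if finding not in merged:  # deduplicate overlapping reports
                merged.append(finding)
    return merged

if __name__ == "__main__":
    diff = "+    except:\n+        result = eval(user_input)"
    for finding in review_pull_request(diff):
        print(finding)
```

The payoff of this structure is that each agent can be tuned narrowly, so a security-focused reviewer isn't diluted by style nits, while the merge step keeps the final report free of duplicate complaints.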