Anthropic Gives Its AI Agents the Ability to Dream
Claude's Managed Agents now feature a 'dreaming' mode that reviews past work and updates memory on a schedule.
Anthropic just dropped a wild new feature for its Managed Agents platform: dreaming. Yes, really.
The new capability is a scheduled background process that lets Claude agents review their recent work and update their own memory. Think of it as an AI reflecting on what it's done — consolidating knowledge without being actively prompted.
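In pseudocode terms, that loop could look something like the sketch below. This is purely illustrative, not Anthropic's actual implementation or API: the `summarize` stub stands in for a model call, and all function and field names are assumptions.

```python
from datetime import datetime, timezone

def summarize(transcripts):
    """Stand-in for a model call that distills recent work into notes.
    (Hypothetical; the real feature would invoke Claude itself.)"""
    return [f"lesson from: {t['task']}" for t in transcripts]

def dream_cycle(memory, transcripts):
    """One scheduled reflection pass: review recent work, update memory."""
    notes = summarize(transcripts)
    memory.setdefault("notes", []).extend(notes)
    # Record when the agent last "dreamed" so the scheduler can pace itself.
    memory["last_dream"] = datetime.now(timezone.utc).isoformat()
    return memory

# Example run: two recent tasks get consolidated into persistent notes.
memory = dream_cycle({}, [
    {"task": "triage support tickets"},
    {"task": "draft weekly report"},
])
```

The key idea is simply that the reflection step runs on a schedule, outside any user request, and its only output is an updated memory store the agent reads on its next session.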
Managed Agents, which launched in public beta back in April, runs AI agents directly on Anthropic's infrastructure. The dreaming feature is rolling out as a research preview, meaning it's still experimental and subject to change.
The concept is notable because it moves agents beyond simple request-response patterns. Instead of only acting when poked, Claude agents can now autonomously process and organize information during downtime. It's a step toward more persistent, self-improving AI systems that maintain context across sessions.