Update, January 4, 2026:
After her comments drew widespread attention, Google principal engineer Jaana Dogan clarified her comparison between Anthropic’s Claude Code and an internal Google project.
Dogan says Google has built several versions of a distributed agent orchestration system over the past year, each with different tradeoffs and no clear winner. When she prompted Claude Code with the best surviving ideas, it generated a workable toy version of the system in about an hour.
“What I built this weekend isn’t production grade and is a toy version, but a useful starting point,” she wrote. Dogan adds that she was surprised by the quality of the result, given that she had not provided detailed design instructions, and that Claude Code still surfaced “good recommendations.”
According to Dogan, the hard part is the years of experience required to shape durable patterns and product ideas. Once that knowledge exists, rebuilding systems becomes far easier.
“It’s totally trivial today to take your knowledge and build it again, which wasn’t possible in the past,” she wrote, arguing that starting from scratch can also free new systems from legacy constraints.
Original article, January 3, 2026:
A senior Google engineer says Anthropic’s Claude Code produced, in about an hour, a working system similar to one her team has been developing since last year.
Jaana Dogan, a principal engineer at Google responsible for the Gemini API, wrote on X that she gave Claude Code a problem description related to distributed agent orchestrators — systems that coordinate multiple AI agents — and received an implementation that broadly matched the direction of her team’s work.
She noted that the prompt consisted of only three paragraphs and was based on a simplified version of Google’s internal ideas, since she could not share proprietary details. Google, she said, had tried multiple approaches to the orchestration problem without reaching an internal consensus.
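Dogan did not publish her prompt or the resulting code, so the details of Google’s design remain unknown. As a rough illustration only, a distributed agent orchestrator in its simplest form fans tasks out to specialized agents and coordinates their results — something like the following hypothetical Python sketch, in which the agent functions are stand-ins for LLM calls and all names are invented for this example:

```python
from concurrent.futures import ThreadPoolExecutor

# Illustrative stand-ins for LLM-backed agents; in a real system these
# would call a model API rather than return fixed strings.
def research_agent(task: str) -> str:
    # Gathers context for a task.
    return f"notes on {task}"

def coding_agent(task: str, notes: str) -> str:
    # Produces a change based on the gathered context.
    return f"patch for {task} using {notes}"

def orchestrate(tasks: list[str]) -> list[str]:
    """Run each task through a research-then-code pipeline,
    executing independent tasks concurrently."""
    def pipeline(task: str) -> str:
        notes = research_agent(task)
        return coding_agent(task, notes)

    with ThreadPoolExecutor(max_workers=4) as pool:
        # pool.map preserves input order in its results.
        return list(pool.map(pipeline, tasks))

results = orchestrate(["fix login bug", "add retry logic"])
```

A production orchestrator would add the parts Dogan says are hard-won: failure handling, scheduling policy, and the accumulated design judgment about which tradeoffs matter.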
Dogan acknowledges that the output from Claude Code is not perfect and requires refinement, but calls it strong enough to convince skeptics to test coding agents in fields where they have deep expertise.
When asked whether Google itself uses Claude Code, she replied that it is permitted only for open-source projects, not internal code. Asked when Google’s Gemini system might reach a similar level, she answered: “We are working hard right now. The models and the harness.”
Dogan also pushed back on framing AI development as a winner-takes-all race. She wrote that the industry “has never been a zero-sum game” and that it makes sense to acknowledge rival advances.
“Claude Code is impressive work, I’m excited and more motivated to push us all forward,” she added.
Rapid progress in AI coding tools
Dogan outlined how quickly AI-assisted programming has evolved in recent years. In her view, tools have progressed through four stages:
- 2022: Completing individual lines of code.
- 2023: Filling in entire code sections.
- 2024: Working across multiple files and building simple applications.
- 2025: Creating and restructuring entire codebases.
She wrote that in 2022 she doubted the 2024 capabilities could scale to a global developer product, and that in 2023 the current level of performance still seemed about five years away. “Quality and efficiency gains in this domain are beyond what anyone could have imagined so far,” she concluded.
Claude Code creator shares usage strategies
Around the same time, Boris Cherny, the creator of Claude Code, shared his recommendations for getting better results from the tool.
His main guideline is to give Claude ways to verify its own work. This feedback loop, he says, can double or triple output quality. Cherny suggests starting most sessions in a planning mode and iterating with Claude until the plan is clear, after which the model can usually complete the task in a single run.
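The feedback loop Cherny describes can be reduced to a simple pattern: generate, verify, and feed any failure back into the next attempt. The sketch below is a hypothetical illustration of that loop, with `run_agent` standing in for a call to a coding agent and a deliberately toy `verify` step; none of these names come from Claude Code itself:

```python
def run_agent(prompt: str, feedback: str = "") -> str:
    # Stand-in for a coding-agent call. To make the loop observable,
    # it returns buggy code on the first try and fixed code once it
    # receives failure feedback.
    if feedback:
        return "def add(a, b):\n    return a + b"
    return "def add(a, b):\n    return a - b"

def verify(code: str) -> str:
    # Runs a check against the generated code; returns an empty
    # string on success or an error message to feed back.
    namespace = {}
    exec(code, namespace)
    return "" if namespace["add"](2, 3) == 5 else "add(2, 3) != 5"

def generate_with_feedback(prompt: str, max_tries: int = 3) -> str:
    feedback = ""
    for _ in range(max_tries):
        code = run_agent(prompt, feedback)
        feedback = verify(code)
        if not feedback:
            return code
    raise RuntimeError("agent did not converge")
```

In practice the verification step would be a real test suite, linter, or build, which is what gives the agent a concrete signal to iterate against.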
For repeated workflows, Cherny uses slash commands and subagents that automate tasks such as simplifying code or running tests. For longer or more complex assignments, he runs background agents to review Claude’s output and often launches multiple Claude instances in parallel to tackle different parts of a project at once. He lists Opus 4.5 as his default model.
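Claude Code supports project-scoped custom slash commands defined as Markdown files under `.claude/commands/`, where `$ARGUMENTS` is replaced by whatever follows the command. The command name and prompt below are made up for illustration, not taken from Cherny's setup:

```shell
mkdir -p .claude/commands
cat > .claude/commands/simplify.md <<'EOF'
Simplify the code in $ARGUMENTS: remove dead code, reduce nesting,
and keep behavior identical. Run the test suite before finishing.
EOF
# Inside a Claude Code session, this would be invoked as:
#   /simplify src/parser.py
```

Checking command files into the repository lets the whole team share the same repeated workflows.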
During code reviews, Cherny’s team tags Claude directly in colleagues’ pull requests to generate documentation. He says Claude Code can also be integrated with tools such as Slack, BigQuery for data analysis, and Sentry for monitoring error logs.
