On my first day back from vacation, a colleague mentioned he had merged a fix for Android that should also work on iOS. Since I had a Mac with Xcode available, I offered to take over the iOS side.
The task called for a native Objective-C implementation inside a Qt-based application, adjusting OS-level behavior to match the Android fix. With Claude Code, I moved from problem description to working implementation quickly, then manually tested and demoed the result.
The surprise came immediately after: the original fix was not actually working correctly on Android either, so I was asked to port my implementation back to that platform as well.
Under normal circumstances, I would have pushed back on that kind of scope expansion. But with an LLM acting as a coding agent, the cost of implementation had dropped enough that I simply continued. I piloted Claude Code again, updated Android, tested the changes, and opened a merge request.
That is where the real friction appeared.
The code has now been waiting for review for days, mainly because almost nobody on the team can confidently review it. This highlights a growing gap in modern development: LLMs already reduce the effort required to implement fixes in unfamiliar areas, but review processes and team skill distribution have not caught up.
The next day, the same pattern repeated. I was asked to support testing in an area I had not worked in directly. Again relying on Claude Code to navigate the codebase, I found a defect in master, fixed it in minutes, and opened another merge request. Even for such a small change, I had the impression that some of the domain developers did not fully understand the fix, while the LLM handled the code context without difficulty.
What changed was not only speed. What changed was the economics of effort.
Agentic coding makes tasks feel cheaper to take on: unfamiliar platforms, small defects, cross-stack fixes, even moderate feature creep. The human role shifts from writing every line to steering, validating, and integrating.
But that also exposes a new bottleneck. If implementation becomes easy while human review capacity remains limited, delivery slows down for a different reason: not because teams cannot build, but because they cannot confidently validate. (In my case, of course, I fell back on a review agent instead.)
That is the real lesson. LLMs are already making teams faster at producing code. The harder challenge now is building the review culture, trust model, and technical ownership needed to keep pace.