Vibe coding and agentic engineering are getting closer than I'd like

Simon Willison reflects on the convergence of two AI coding paradigms: vibe coding (intuitive, exploratory prompt-based development) and agentic engineering (autonomous agent-driven workflows). His observation marks an inflection point in how developers work with AI tools: as informal experimentation and structured agent orchestration blur together, the boundary between ad-hoc AI assistance and systematic autonomous systems is collapsing, reshaping expectations for what counts as legitimate engineering practice in an agentic era.
Modelwire context
Analyst take
Willison's discomfort is the signal worth tracking: the convergence he describes isn't just aesthetic; it implies that the informal risk tolerance of vibe coding (move fast, accept hallucinations, iterate) is bleeding into agentic workflows that carry real production consequences.
This connects directly to two threads in recent coverage. The arXiv position paper from May 1st arguing that agentic orchestration should be Bayes-consistent identified the same structural problem from the opposite direction: current agent control layers are ad-hoc precisely because they inherited assumptions from exploratory LLM use rather than from principled engineering. Meanwhile, Willison's own iNaturalist project from that same week, built entirely on a phone via Claude Code, is a live demonstration of the blurring he now says concerns him. He was vibe coding an agentic tool in real time. The gap between those two moments, one practical and celebratory, one cautionary, is where the actual tension lives for practitioners.
Watch whether tooling vendors like OpenAI (Codex) or Anthropic introduce explicit guardrails or workflow-stage labeling that formally separates exploratory from production-grade agentic use within the next two quarters. If they don't, the blurring Willison describes hardens into the default industry posture rather than remaining a correctable design choice.
This analysis is generated by Modelwire’s editorial layer from our archive and the summary above. It is not a substitute for the original reporting. For details, see How we write it.
Mentions: Simon Willison · Heavybit · Joseph Ruscio
Modelwire Editorial
This synthesis and analysis was prepared by the Modelwire editorial team. We use advanced language models to read, ground, and connect the day’s most significant AI developments, providing original strategic context that helps practitioners and leaders stay ahead of the frontier.
Modelwire summarizes; we don’t republish. The full content lives on simonwillison.net. If you’re a publisher and want a different summarization policy for your work, see our takedown page.