Using Claude Code: The Unreasonable Effectiveness of HTML

Anthropic researcher Thariq Shihipar makes a case for HTML over Markdown when requesting structured outputs from Claude, arguing that richer markup unlocks more sophisticated formatting and interactivity in LLM responses. The piece surfaces a practical but underexplored design pattern: Claude's artifact system can render complex layouts, inline annotations, and dynamic content more effectively through HTML than plain text alternatives. For builders integrating Claude into workflows that demand polished, information-dense outputs (code reviews, data analysis, documentation), this reframes how to architect prompts for maximum clarity and usability.
Modelwire context
Explainer
The argument isn't just about aesthetics. Choosing HTML over Markdown when prompting Claude is a structural decision: it determines what the model's artifact renderer can actually execute, so the output format shapes the model's effective capability ceiling for a given task, not just its appearance.
This sits in an interesting relationship with the 'Quoting Anthropic' piece from Simon Willison earlier this month, which covered Anthropic's own research surfacing gaps between how Claude behaves and how builders assume it behaves. That piece was about alignment blind spots; this one is about output architecture blind spots. Both point to the same underlying issue: practitioners often work from incomplete mental models of how Claude actually processes and renders information. The HTML-versus-Markdown distinction is a low-cost, high-leverage variable that most prompt engineers haven't systematically tested, which makes it a genuinely useful signal rather than theoretical advice.
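The format lever described above comes down to a single system-prompt swap. A minimal sketch of the idea, assuming the Anthropic Messages API payload shape; the model id and prompt wording here are illustrative assumptions, not Anthropic guidance:

```python
# Sketch: the same task sent with two different output-format instructions.
# Only the system prompt changes; the hypothesis is that the HTML variant
# unlocks richer artifact rendering for the identical request.

def build_request(output_format: str, task: str) -> dict:
    """Build a Messages API-style payload that pins Claude's output format."""
    format_instructions = {
        "markdown": "Respond in plain Markdown.",
        "html": (
            "Respond as a single self-contained HTML document. "
            "Use semantic tags (<section>, <table>, <details>) so the "
            "artifact renderer can produce an information-dense layout."
        ),
    }
    return {
        "model": "claude-sonnet-4-5",  # illustrative model id
        "max_tokens": 4096,
        "system": format_instructions[output_format],
        "messages": [{"role": "user", "content": task}],
    }

markdown_req = build_request("markdown", "Review this pull request.")
html_req = build_request("html", "Review this pull request.")

# Identical task, different formatting ceiling: only the system prompt differs.
assert markdown_req["messages"] == html_req["messages"]
assert markdown_req["system"] != html_req["system"]
```

Testing both variants against the same workload (a code review, a data summary) is the cheap experiment the piece implicitly recommends: the diff is one string, so the comparison isolates the format variable.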
Watch whether Anthropic formalizes HTML as a recommended output format in its official Claude prompting documentation within the next few months. If it does, that confirms this pattern has moved from researcher observation to supported practice. If it doesn't, the technique remains a community heuristic without institutional backing.
Coverage we drew on
- Quoting Anthropic · Simon Willison
This analysis is generated by Modelwire’s editorial layer from our archive and the summary above. It is not a substitute for the original reporting.
Mentions: Claude · Anthropic · Thariq Shihipar · HTML · Markdown
Modelwire Editorial
This synthesis and analysis were prepared by the Modelwire editorial team. We use advanced language models to read, ground, and connect the day’s most significant AI developments, providing original strategic context that helps practitioners and leaders stay ahead of the frontier.
Modelwire summarizes; we don’t republish. The full content lives on simonwillison.net. If you’re a publisher and want a different summarization policy for your work, see our takedown page.