Part 1 of 3 — Engineering Intent Series - Stop Prompting, Start Compiling: The Path to Predictable AI-Generated Code

Source: DEV Community
If you use LLMs to generate code, you are likely dealing with a "Slot Machine" workflow. You pull the lever with a prompt, get a great result, and then — two days later, on a different model, or with a different colleague — the same request produces something completely different. Different patterns, different variable names, different bugs. In software engineering, this inconsistency has a name: The Ambiguity Tax.

The Root Cause: Conflating Intent with Implementation

The problem isn't that the AI is hallucinating. The problem is that natural language is inherently ambiguous. Take this common request: "Implement a user profile page with validation." To a human — or an AI — this leaves a dozen critical questions unanswered:

- Is validation client-side, server-side, or both?
- Does "profile" include avatar uploads or just text fields?
- What happens to the UI while the save is in progress?
- Is state managed locally or via a global store?
- What does a validation error look like — inline, toast, m
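One way to see how much the prompt leaves open is to write the answers down explicitly. The sketch below is purely illustrative — the spec shape, field names, and option values are hypothetical, not something the article has defined — but it shows how each ambiguous question becomes a single, forced choice:

```typescript
// Hypothetical "intent spec" for the prompt above. Every field name and
// option value here is an assumption for illustration only.
interface ProfilePageSpec {
  validation: "client" | "server" | "both";   // where does validation run?
  avatarUpload: boolean;                      // avatar included, or text fields only?
  savingUi: "disable-form" | "spinner" | "optimistic"; // UI while save is in flight
  stateScope: "local" | "global-store";       // local state vs. a global store
  errorDisplay: "inline" | "toast" | "modal"; // how a validation error is shown
  fields: string[];                           // which profile fields exist at all
}

// The same request, disambiguated: each question now has exactly one answer.
const spec: ProfilePageSpec = {
  validation: "both",
  avatarUpload: true,
  savingUi: "disable-form",
  stateScope: "local",
  errorDisplay: "inline",
  fields: ["displayName", "email", "bio"],
};

console.log(JSON.stringify(spec));
```

Two engineers (or two models) handed this object cannot silently diverge on any of the six questions — the ambiguity has been paid down up front rather than taxed later.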