Dijkstra Called It in 1978: Natural Language Programming Is Still Foolish
Created: 2026-03-09
TL;DR
In 1978, Edsger Dijkstra wrote a short essay arguing that programming in natural language would be a disaster, not because machines couldn't parse it, but because natural language is fundamentally hostile to precision. Formal symbolism isn't a burden; it's the tool that lets schoolchildren do what once required genius. Nearly five decades later, as LLMs turn English prompts into code, Dijkstra's core argument hasn't aged a day. The tension between ambiguity and correctness is still the bottleneck.
The Essay
"But a moment is a long time, and thought is a painful process." - A.E. Housman, quoted by Dijkstra
EWD667, titled "On the foolishness of 'natural language programming'", runs about 1,200 words. It's vintage Dijkstra: no citations, no data, just a razor-sharp argument delivered with the confidence of someone who has thought about this longer than most people have been alive.
The setup: since the earliest days of computing, people have resented the strictness of machines. They wished for "more sensible" computers that would catch obvious mistakes instead of executing them literally. High-level languages helped, turning some silent wrong answers into error messages, but programming remained a formal, symbolic activity requiring precision.
The proposed fix: let people instruct machines in natural language. Shift the burden to the machine. Sounds reasonable.
Dijkstra's response: no.
The Narrow Interface Argument
Dijkstra makes an underappreciated systems-design point. Changing an interface isn't just a reallocation of fixed work between two parties. The communication overhead of the interface itself must be added to both sides.
A wider interface, like natural language, doesn't just make the machine's job harder. It makes the human's job harder too, because now you have to worry about whether the machine interpreted your ambiguous instructions correctly. This is why engineers prefer narrow interfaces: they minimize the surface area for misunderstanding.
This maps directly to the experience of anyone who has used an LLM for coding. The prompt is the wide interface. The generated code is the narrow one. And the gap between them, the interpretation layer, is where bugs are born.
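Dijkstra's accounting can be sketched in a few lines of Python. Everything here is invented for illustration, not a real API: the wide interface forces the machine to guess at meaning, and then forces the human to verify the guess.

```python
def set_timeout_wide(instruction: str) -> float:
    """Wide interface (hypothetical): intent arrives as English."""
    # The machine must guess what the words mean...
    guesses = {"a second": 1.0, "a couple of seconds": 2.0, "a few seconds": 3.0}
    # ...and silently falls back to a default when the guess fails.
    return guesses.get(instruction.lower().strip(), 1.0)

def set_timeout_narrow(seconds: float) -> float:
    """Narrow interface: the caller has already been precise."""
    return float(seconds)
```

The wide version doesn't eliminate the work of being precise; it moves that work to a review step after the fact, which is exactly the communication overhead Dijkstra says must be added to both sides.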
It's also the same insight behind the case for AI specialization over AGI: breadth is overrated. A system that tries to understand everything you might mean will understand nothing reliably. Narrow interfaces and narrow capabilities are what make systems trustworthy.
Mathematics Already Proved This
Dijkstra's strongest argument is historical. The progression of mathematics is a 2,000-year case study in escaping natural language:
- Greek mathematics got stuck because it remained verbal and pictorial
- Moslem algebra made a timid attempt at symbolism, then died when it reverted to rhetoric
- Modern mathematics only emerged when Vieta, Descartes, Leibniz, and Boole designed formal symbolisms
The pattern is unambiguous: every major leap in mathematical capability came from replacing natural language with formal notation. The symbols didn't make math harder. They made it possible.
"The virtue of formal texts is that their manipulations, in order to be legitimate, need to satisfy only a few simple rules; they are, when you come to think of it, an amazingly effective tool for ruling out all sorts of nonsense that, when we use our native tongues, are almost impossible to avoid."
This is the key insight. Formal notation isn't a barrier to entry; it's a nonsense filter. Natural language is the opposite: it's optimized for producing statements whose nonsense isn't obvious.
The Privilege of Formalism
Dijkstra flips the usual framing. Instead of treating formal notation as a burden that alienates beginners, he calls it a privilege:
"Thanks to them, school children can learn to do what in earlier days only genius could achieve."
This is worth sitting with. Algebraic notation didn't make math elitist. It democratized it. Before symbolic algebra, solving a quadratic equation required the intuition of a trained scholar. After it, any student with a textbook could follow the formula.
The parallel to programming is exact. A well-designed type system or a strict compiler isn't punishing you. It's doing the hard work of catching nonsense before it reaches production. The people who complain about strict compilers are, as Dijkstra notes, the same people who "equate 'the ease of programming' with the ease of making undetected mistakes."
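A minimal sketch of the nonsense filter in modern dress (the Money type and its rules are invented for illustration): encode the units in the types, and a whole class of nonsense either becomes unwritable or gets loudly rejected instead of silently computed.

```python
from dataclasses import dataclass
from enum import Enum

class Currency(Enum):
    USD = "USD"
    EUR = "EUR"

@dataclass(frozen=True)
class Money:
    cents: int          # integer cents: the type rules out float-rounding drift
    currency: Currency

    def __add__(self, other: "Money") -> "Money":
        if self.currency is not other.currency:
            # Refuse the nonsense instead of computing a fluent, wrong number.
            raise ValueError("cannot add amounts in different currencies")
        return Money(self.cents + other.cents, self.currency)
```

Adding dollars to euros raises immediately; with bare floats, the same mistake would have produced a plausible total and reached production.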
Where Dijkstra Was Wrong
Let's give credit where it's due: Dijkstra was wrong about one thing. He imagined that building machines you could instruct in natural language would require "a few thousand years" of bootstrapping from informal language to formal systems, mirroring the slow arc of mathematical history. Transformer-based models collapsed that timeline to under a decade.
That's a genuine surprise. The engineering problem he thought was nearly impossible turned out to be tractable. But notice what LLMs actually solved: they made natural language input feasible. They did not make natural language precise. The benchmark-to-production gap in LLM code generation, where models score 88% on synthetic tasks but hit 30% on real-world class-level code, is exactly the phenomenon Dijkstra predicted. The ambiguity isn't in the machine's parser. It's in the human's prompt.
The failure mode isn't syntax errors. It's semantic drift: the LLM generates code that is plausible, compiles, and does something subtly different from what you meant. This is the natural-language nonsense that Dijkstra warned about, dressed up in Python.
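Here is a concrete instance of that drift, built around an invented prompt: "round each amount to the nearest whole dollar." Both readings below are fluent Python, both compile, and they quietly disagree on the halfway cases, because Python's built-in round() uses banker's rounding (halves go to the even neighbor).

```python
from decimal import Decimal, ROUND_HALF_UP

def round_half_up(amount: str) -> int:
    """Reading 1: what most people mean by 'nearest' (halves go up)."""
    return int(Decimal(amount).quantize(Decimal("1"), rounding=ROUND_HALF_UP))

def round_builtin(amount: str) -> int:
    """Reading 2: what round() actually does (halves go to the even neighbor)."""
    return round(float(amount))

# Both look right in review; on "2.50" they differ:
# round_half_up("2.50") == 3, round_builtin("2.50") == 2
```

Neither function is a syntax error. The bug lives entirely in the gap between the English word "nearest" and the two formal meanings it can take.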
The New Illiteracy, Updated
Dijkstra included a prescient aside about what he called "The New Illiteracy", the declining ability of educated people to use their own language effectively. He pointed to "meaningless verbiage in scientific articles, technical reports, government publications."
This has only accelerated. LLMs can now generate that meaningless verbiage at scale. The irony is thick: we've built machines that are extremely good at producing fluent natural language, and the result is an explosion of text that says nothing. Dijkstra would not be surprised.
For programming specifically, the risk is that vibe coding (writing English prompts and accepting whatever the LLM produces) creates a generation of developers who can describe what they want but can't verify what they got. The formal symbolism that Dijkstra championed is exactly the tool needed to close that verification gap.
The Takeaway for 2026
Dijkstra's essay isn't an argument against using LLMs for coding. It's an argument for understanding what you lose when you abandon formal precision for natural-language convenience:
- Formal notation is a nonsense filter. Natural language is optimized for ambiguity. Code is not. Don't confuse the ease of writing a prompt with the ease of getting correct code.
- Wider interfaces increase work on both sides. The time you save writing a prompt, you spend reviewing, debugging, and re-prompting. The interface got wider; the total work didn't shrink.
- Verification requires formalism. You cannot verify natural-language specifications against natural-language outputs. At some point, someone has to read the code. That someone should be you.
- The privilege of symbolism still holds. Type systems, compilers, linters, and formal specs are not obstacles to productivity. They are the machinery that catches nonsense before your users do.
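The last two bullets can be made concrete with a small, hypothetical example. Suppose a model answered the prompt "remove duplicates from the list" with the one-liner below; an executable check, not another English sentence, is what exposes the drift.

```python
def dedupe(items: list[int]) -> list[int]:
    """Hypothetical LLM output: fluent, short, and subtly wrong."""
    return list(set(items))  # removes duplicates, but loses the original order

def meets_spec(fn) -> bool:
    """An executable spec pins down what the prompt left unsaid."""
    data = [3, 1, 3, 2, 1]
    out = fn(data)
    unique = sorted(out) == [1, 2, 3]       # no duplicates remain
    order_preserved = out == [3, 1, 2]      # first occurrences keep their order
    return unique and order_preserved
```

meets_spec(dedupe) comes back False: the set() version satisfies "no duplicates" but not the ordering most callers silently assume. The check is formal in exactly Dijkstra's sense: a few simple rules, legitimate manipulations only, no room for fluent nonsense.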
The most dangerous belief in AI-assisted development isn't that LLMs can't code. It's that you no longer need to.
Or as Dijkstra put it with characteristic bluntness:
"I suspect that machines to be programmed in our native tongues - be it Dutch, English, American, French, German, or Swahili - are as damned difficult to make as they would be to use."
He was half-wrong. We made them. They're still damned difficult to use well.
About Dijkstra
Edsger W. Dijkstra (1930–2002) was a Dutch computer scientist whose contributions shaped the field: the shortest-path algorithm that bears his name, the concept of structured programming, the semaphore for concurrent process synchronization, and the "THE" multiprogramming system. He won the Turing Award in 1972. He is also remembered for his prolific handwritten manuscripts (numbered EWD0 through EWD1318) in which he argued, often sharply, for mathematical rigor in programming. EWD667, the essay discussed here, is one of them.
References
- On the Foolishness of "Natural Language Programming" (EWD667) - Edsger W. Dijkstra, 1978
- Your LLM Scores 88% on Code Benchmarks. In Production, It Hits 30%. - Daita blog
- Forget AGI: The AI That Folds Proteins Should Not Fold Your Laundry - Daita blog