Not Every Utterance is Sacred

Humans have always found strange ways to make big decisions.

I.

For over a thousand years, beginning around the 8th century BCE, people seeking answers to life’s big questions traveled to a remote mountainside in Greece to speak with a woman seated above a crack in the earth. She inhaled vapors that rose from the stone, slipped into a trance, and began to mutter. A team of temple priests—men trained in ritual and rhetoric—stood nearby to translate her broken words into something the rest of society could use.

The answers she gave were never direct. They were riddles, fragments, metaphors. But they carried enormous weight. Kingdoms rose and fell on her utterances. Leaders changed course. Wars were waged or averted based on her words.

One of the most famous consultations came during the Persian invasion of Greece. The Athenians, desperate for guidance, asked the Oracle how they could survive. Her answer was vague:

“A wooden wall will save you.”

Some leaders assumed this meant barricades, or perhaps reinforcing the Acropolis with timber. Themistocles, a sharp and stubborn general, saw it differently. A “wooden wall,” he argued, meant ships. Athens should fight at sea. His interpretation—neither obvious nor widely accepted—turned the tide of history. His navy lured the Persian fleet into the straits of Salamis, where their numbers counted for little, and crushed them.

Not every leader was so careful. Croesus, king of Lydia, asked the Oracle if he should wage war against Persia. The answer:

“If you cross the river, a great empire will fall.”

Croesus took it as a blessing. He assumed it meant Persia’s empire would crumble. It didn’t. The empire that fell was his.

The Oracle was never designed to give straight answers. It offered symbols, images, hints. The work—the real work—was in thinking about what those words could mean, and how they applied to the situation at hand. This wasn’t a bug in the system. It was the feature. The Oracle’s cryptic responses forced leaders to engage their own judgment, to debate with advisors, to consider multiple possibilities before acting.

Over centuries, societies gradually moved away from oracles, but they never abandoned the need for interpretation itself. They simply shifted it to new authorities: philosophers who reasoned through ethical dilemmas, legal scholars who parsed complex statutes, scientific experts who translated research into policy recommendations. The interpretive layer remained—it just changed form.

Then came something genuinely different.

II.

In the mid-20th century, humans learned to refine sand into silicon, launching the computing revolution. Decades later, they discovered that training language models on vast datasets produced an unexpected result: systems that could respond to almost any question in fluent, confident prose. Unlike the Oracle’s cryptic fragments, these responses sounded complete, authoritative, ready to use.

This created a profound shift in how interpretive authority works. The Oracle came with interpretation baked into the experience—her fragmented utterances practically demanded careful analysis. But Large Language Models give us something unprecedented: responses that appear to eliminate the need for interpretation entirely. They speak in clear sentences, offer structured reasoning, present themselves as finished thoughts rather than raw material for thinking.

This is precisely what makes them dangerous. We’ve accidentally created a system that looks like it doesn’t need the interpretive work that every previous decision-making authority required. The fluent confidence of AI output masks the fact that it still requires the same careful analysis, contextual judgment, and skeptical questioning that humans applied to oracles. We’ve just lost the cultural cues that reminded us to do that work.

Consider two startup founders facing the same dilemma. Both ask ChatGPT: “Should I pivot my business?” The first founder gets a confident 500-word response about market validation and customer feedback. It sounds authoritative, comprehensive. She starts executing immediately—hiring consultants, redesigning products, shifting strategy. The AI said pivot, so she pivots. Six months later, her company is hemorrhaging cash, chasing a market the AI knew nothing about.

The second founder gets the same response. But instead of treating it as gospel, she uses it as a structured thinking prompt. The AI mentioned “customer feedback”—when did she last actually talk to customers? The response suggested “market validation”—what evidence does she have that people want this product? The AI didn’t solve her pivot question, but it surfaced gaps in her decision-making process. She spends the next month talking to customers, discovers they love her current product but hate her pricing model, and adjusts accordingly.

A famous IBM slide from one of the first AI booms in the 1950s

If we treat AI as an oracle in the wrong way—assuming it hands down answers rather than merely definitive-sounding words—we risk losing the very thing that makes us good decision-makers: our judgment. And judgment is the thing we can’t afford to lose. As AI tools become capable of more and more, we still have to decide what we want them to do for us.

III.

At the Temple of Apollo, the most important guidance wasn’t spoken by the Oracle. It was carved into the stone above the entrance: “Know thyself.” AI will give you language: fluent, confident, sometimes useful. But the most important thing isn’t what it says. It’s knowing yourself well enough to know what you should do with what it says.

Thanks to readers of earlier drafts: Cansafis Foote, Jack Purdy