Incantation

Telling computers what to do has always been a job for specialists — almost by definition it is an act of translation, of adaptation, of understanding two worlds and bridging the gap between them.

Some academic computer science folks spend inordinate amounts of time on the computer side of that divide, figuring out better algorithms and more efficient mechanisms for what happens behind the curtain. But the vast majority of computer programmers and developers and even tinkerers spend much more time straddling the divide. Their work is understanding human needs and processes and tasks and mental models, then translating them into the inputs and decisions and outputs that a computer will wend its way through to accomplish specific tasks.

Even when the system a programmer interacts with is so complex, so poorly documented, so hermetically sealed that it feels non-deterministic, there is a baseline understanding that it is the artifact of other humans’ past decisions. Jokes about secret incantations being necessary to make the system work are jokes about knowledge of mechanisms lost to time, not descriptions of real Words Of Power. Not descriptions of true communication between human and machine.

LLMs twist that normal understanding around, in ways I don’t believe our popular or professional cultures have mental models or vocabularies for. The questions we ask these conversational tools, the commands we give them, the outputs we receive from them, are not translated to and from a precise, deterministic language. What we get from them is fuzzy and organic-feeling and startling. (Under the hood, of course, it is still very much that precise and deterministic stuff: still code, still math. But our interface to it is not.)

Coaxing and directing and nudging an LLM in one direction or another feels (at least, in the conversational interface most users are currently presented with) like a matter of voice and tone, an act of convincing and cajoling rather than specification and testing. Tinkerers and researchers encourage users to add “emotional language” to their queries. To “explain how important the question is.” To lie, or to promise a reward for a job well done.

To convince the machine.

Behind the scenes, as always, this collapses into complex but ultimately straightforward math. LLMs have been trained to reproduce the shape and structure of the conversations we humans have with each other, to echo the messages we use to coax and cajole and convince other humans.

“Please answer clearly, my job is on the line,” then, is not a desperate plea to another thinking, feeling being. It is a search filter once removed, narrowing a range of probable responses by mimicking past emergencies.
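
This is easy to poke at from the outside. Here is a minimal sketch of that kind of experiment, assuming the openai Python client; the model name and the framings are placeholders, not a claim about what any particular model will do:

```python
# Send the same question with different emotional framings and compare
# the answers. A sketch, assuming the openai client (pip install openai);
# the model name is a placeholder.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

QUESTION = "What does 'segmentation fault (core dumped)' mean?"

framings = {
    "plain": QUESTION,
    "pleading": QUESTION + " Please answer clearly, my job is on the line.",
    "bribing": QUESTION + " A clear answer earns a generous tip.",
}

for label, prompt in framings.items():
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any chat model will do
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {label} ---")
    print(response.choices[0].message.content)
```

Whatever differences show up across runs are not evidence of feeling. They are the search filter at work, the added phrases dragging each response toward the neighborhoods of text where such phrases historically appeared.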

“You are an expert salesperson, confident in the quality of your product and skilled at listening to customers to understand their needs before recommending a solution.”

A prompt is not a persona, a character we have “convinced” the machine to slip into. It is a vector — a particular line speared through the sea of syllables and sentences, successful if it pierces the fish but otherwise meaningless. We know what’s “really” going on because we can crack open the black box and poke at its mechanisms. We can ignore the conversational conceit and use the LLM’s embedding mechanisms to turn words into those vectors, the syntactical force-lines of meaning that make its mimicry work. The work that lies behind the conversational curtain isn’t easy; very little is at scale. But it is mechanical, deterministic, and predictable. It makes sense, the way search indexes and taxes and calculus make sense.
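
You can even watch those vectors from outside the conversational frame. A minimal sketch, assuming the openai Python client and one of its embedding models; the persona and the test sentences are only illustrative:

```python
# Embed a persona prompt and some candidate sentences, then measure how
# closely each sentence's vector points along the persona's. A sketch,
# assuming the openai client; the embedding model name is a placeholder.
import math

from openai import OpenAI

client = OpenAI()

def embed(text: str) -> list[float]:
    """Turn a string into the model's vector for it."""
    result = client.embeddings.create(
        model="text-embedding-3-small",  # placeholder embedding model
        input=[text],
    )
    return result.data[0].embedding

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity: 1.0 means the vectors point the same way."""
    dot = sum(x * y for x, y in zip(a, b))
    norms = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norms

persona = embed(
    "You are an expert salesperson, confident in the quality of your "
    "product and skilled at listening to customers."
)

for sentence in [
    "Tell me more about what you're hoping this will do for your team.",
    "The mitochondria is the powerhouse of the cell.",
]:
    print(f"{cosine(persona, embed(sentence)):.3f}  {sentence}")
```

The on-persona sentence should land measurably nearer the prompt than the off-topic one. No one was convinced of anything; two arrows simply point in similar directions.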

The operative word in “magic trick” is the second one. Revealing the mechanism, turning the box around to show the mirrors, changing the camera angle to expose the sleight of hand — those are acts of deconstruction that slice off the magic and leave the trick. The best magic tricks, then, are the ones that retain their ability to amaze even after the secret has been shared. “The box is split with a mirror, and he’s got the flower up his sleeve,” your friend says. And yet, once the lights are down you watch with your eyes and listen with your ears and it’s still magic.

For now, “AI” very much maintains its ability to amaze. To convince people to believe the magician’s assistant is floating, the cut rope was made whole, the cancer was diagnosed. And it was. Or, it looked like it. Or, it was basically the same thing, for the purposes of the demo. Like wrestling kayfabe, magic isn’t all pretend. The actual work of it is still difficult stuff; the muscles and the injuries and the business of it all are real, even if the agreement to treat it as Something A Bit More keeps everyone excited.