The Psychological Weirdness of “Prompt Engineering”

It’s like playing Taboo with an alien creature

Clive Thompson
6 min read · Oct 20, 2022


Midjourney creation from the prompt “pac-man screen, pac man, ghosts, inky, blinky, clyde, pac man maze, pac man, in the style of mondrian, modern art, modernism, bloom”

“Prompt engineering” is a funny phrase, isn’t it?

It’s the term of art for when you feed a textual prompt to an AI (like DALL-E or Midjourney) to get it to produce a picture, or when you ask the code-generating AI Copilot to write some software.

Calling this process “engineering” makes it sound precise and logical. But if you go to the Midjourney Discord and watch people issuing prompts, you’ll see stuff like this …

galaxy arising from a brain, 8k, octane render, micro detailed --upbeta --test --creative

my teeth are yellow, hello world :: would you like me a little better if they were white like yours --s 5000 --q 2 --upbeta --v 3

hg giger lovecraft nightmarish realm where monsters eternally reign terror

chaos corrupted the once valor knight, transforming them into a powerful villian. Horns bursted from their heads, wing and tails grew from their sides, fingers and toes grew into claws. this is what does the void does. this is how life loses….

There’s definitely a method to prompt-writing. But it feels less like a methodical form of engineering than like someone casting about for the right magical incantations, having accidentally misplaced their spellbook and thus button-mashing things a bit. Or perhaps more interestingly, prompt-writing seems like a human trying to coax an eager but befuddled pack animal to do their bidding. We think it’s understanding what we’re saying? But we’re using a lot of jazz hands and excitable shouting to make sure.

This makes for a very strange moment in the history of AI. For decades, AI labored (not always, but often) under the shadow of the Turing Test, the idea that a “smart” AI would be one that behaved and communicated precisely like a clever human. Under the Turing idea, an artificial lifeform could be considered intelligent if it could, say, capably discuss current events. Most recently we’ve extended this clear, precise, natural-language expectation to everyday devices: We talk to Siri and Alexa in everyday cadences, asking for the weather or to set timers.


I write 2X a week on tech, science, culture — and how those collide. Writer at NYT mag/Wired; author, “Coders”. @clive@saturation.social clive@clivethompson.net