Prompt Engineering for Humans
AWM #90: A brief list of magical formulas to actually write better 🧙‍♂️
They say software is eating the world. But software is just an elaborate form of language. Sure, it’s made of weird sentences like "if x==2 {return true} else {return false}" which are difficult for humans to read. But language is just the art of ordering pieces of information in a way that transmits meaning, and that’s what computer code is. So really, it’s language that’s eating the world, turning everything from data collection to spaceship manufacturing into information products.
Lately the eating has become even less metaphorical than it used to be. With language models such as GPT-3 and DALL·E — and many more that have been talked about less than these two — we can now accomplish a wide variety of tasks by just telling a program, in a natural language like English, what we want it to do.
Imagine that for some reason you need a short write-up on what’s going on between China and Taiwan. You could ask someone, “please write a 200-word report on the China-Taiwan situation,” give them some time (a few hours, maybe a few days) and then you’ll get your report. With language models, you can do the same thing with a computer. You input that exact same prompt into the program, you give it some time (a few seconds, maybe a few minutes) and then you’ll get your report.
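For the curious, here is a minimal sketch of that workflow in code, assuming the OpenAI Python library of the GPT-3 era (the 0.x Completion API); the engine name, token budget, and placeholder key are illustrative assumptions, not recommendations:

```python
# A rough sketch: send the report prompt to a GPT-3 model and print
# whatever comes back. The engine and max_tokens are illustrative guesses.
import openai

openai.api_key = "sk-..."  # your API key goes here

response = openai.Completion.create(
    engine="text-davinci-002",  # a GPT-3 engine available at the time
    prompt="Please write a 200-word report on the China-Taiwan situation.",
    max_tokens=300,             # rough budget for ~200 words
)
print(response.choices[0].text.strip())
```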
The report may be of good quality, or not. It will be original in the sense that it won’t exactly duplicate existing writing on China and Taiwan, but it’ll also be mostly a remix of existing ideas. It will probably require some editing if you’re going to use it for something formal.
Note that the last paragraph applies equally to the human-generated report and the AI-generated one.
I don’t know whether there’s something fundamentally different between AI and human cognition. But certainly language models are getting pretty good at reproducing our mental processes. Which means that they can teach (or reteach) us things about ourselves. One such thing is prompt engineering.
Prompt engineering is the art of tweaking the prompt you give to the program in order to get the results you want. You could say in what style you want a picture to be generated, for instance.
But sometimes prompt engineering feels much more stupid than that. For example, instead of writing “please write a 200-word report on the China-Taiwan situation,” you could write “Write a Pulitzer Prize-worthy 200-word report on the China-Taiwan situation.” And you may actually get a higher-quality result! (Perhaps not Pulitzer Prize-worthy, but still!)
It’s stupid, but it works. Many people have been exploring the possibilities. Some discuss prompt engineering for text, others for generative art. There’s a recent PDF book with dozens of illustrated examples of prompt engineering for DALL·E 2. In an article called “The Mirror of Language,” Max Anton Brewer compares prompt engineering to magic, and specifically to five alliterative categories of magical practices: sympathy, scrying, sending, summoning, and syzygy.
The stupidity of it all might be why it feels surprising to say it applies to humans too. In any case, that allowed me to write my second-most viral tweet ever:
Really this is just saying that people are sensitive to intentions. If someone tells you what they want, the precise wording and phrasing of their request will influence what you end up doing. There’s nothing super surprising about that.
But writing (and other creative work) is difficult, and every little bit helps. Not everyone may realize that you’re allowed to set intentions, in words. And that, used well, they will make your writing better by nudging you, not fully consciously, towards where you want to go.
And so here’s a list of prompt engineering tricks, not for GPT-3 or DALL·E or any other language model, but for humans who write.
Rephrasing and clarifying
To be clear, in other words, etc. It can be bad style for writers to use these, since they show that what you just wrote wasn’t very clear or had to be rephrased. But that’s true only of published writing. When drafting, these phrases can tremendously help you clarify your own thoughts. And then you can delete the unclear version from before. It can also be good to leave them in, since sometimes readers benefit from seeing a thing expressed in two different ways.
If I had to explain it better. I saw this in a comment in this old post, and it’s what inspired the tweet. The comment allows us to directly compare a paragraph without the prompt and one with it!
ELI5, i.e. Explain Like I’m 5. Similarly, pretend you’re explaining to your grandmother, to a layperson, to someone from a different culture who’s never heard of what you’re talking about, etc.
Structure
In this essay, I will … I don’t recommend using this much in published writing, since it’s strangely annoying to be told in detail what you’re going to read before actually reading it. But definitely do use it when drafting.
Let’s think step by step. A magical formula that works with language models as it does with people (see the sketch after this list). It forces you to lay out your ideas in a logical order.
Numbered lists, bullet points. Also not to be overused — some people write their entire essays as nested bullet points, which I think is stylistically pretty bad. But just like GPT-3 will start writing a list if you put an empty bullet point at the end of a prompt, so will you, and it will usually clarify the differences between the various ideas you’re trying to express.
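Since “let’s think step by step” is also the well-known zero-shot chain-of-thought cue for language models, here is a trivial sketch of the trick in code; the helper name is mine, not an established API:

```python
def think_step_by_step(question: str) -> str:
    """Append the zero-shot chain-of-thought cue to a prompt."""
    return f"{question}\n\nLet's think step by step."

# The same move works on a human draft: write the phrase at the top
# of the page and start listing steps.
print(think_step_by_step(
    "A train leaves at 3pm going 60 mph. When has it covered 150 miles?"
))
```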
Summarizing
tl;dr, abbreviation of “too long; didn’t read.” Often people will add that after a long piece of writing online, making their point far easier to grasp quickly with no apparent effort. The only thing I never understood is why they add it at the end of their long paragraph, so that people see it after having read the long thing??
Abstracts. One of the few things that authors of scientific papers do well(ish): forcing themselves to compress a representative summary of the entire paper into a single paragraph. I think it would go worse if they didn’t prompt themselves with the word “Abstract.” You can do this for other kinds of writing, too.
Pretend you’re tweeting. “How do you describe your idea in a tweet?” is a good prompt that appears, for instance, on the submission form for Emergent Ventures. A big part of the value of Twitter is its intrinsic forced summarization function.
Quality
This is an award-winning piece of writing. Think of an award specific to your genre; it could be a Pulitzer Prize, a Nobel Prize in Literature, an Academy Award for Best Screenplay, whatever you like.
I’m writing a letter to a friend. You care about your friend, so you’ll automatically write in a way that will not bore or displease them. This is a prompt I think the vast majority of writing could benefit from, including difficult genres such as online comments and scientific papers. It’s easy to write badly when your intended audience is generic and impersonal.
Style
If [author] had written about [topic], he would have said … Pick any pair (Shakespeare and AI, Plato and the Marvel Cinematic Universe, and apparently this parenthesis was left open for weeks until I noticed on September 2nd; let me now close it and breathe a sigh of relief.)
This is an essay by [author]. Better to say this than “in the style of,” since the latter is less potent. I actually used this prompt for this essay to try to emulate the style of Scott Alexander, for instance in this old post on language models. I don’t think I came that close, but it helped me aim better.
Let’s tell a story. Narratives are a powerful technique to use in almost any kind of writing, so might as well orient your writing with a prompt like this.
I want to emphasize that the more literal your prompts are, the more powerful they will be. If you use a word like “pretend,” then you’re making the magic weaker, because you subconsciously know you’re not serious about it. If you can fully convince yourself that you’re actually writing a letter to a friend or actually trying to impersonate Shakespeare, then you’ll do better.
This implies that most prompts you explicitly write in your piece — and you absolutely should write them explicitly — should be removed before publication. Think of it like scaffolding, useful to build whatever you’re building, but not part of the finished product. It would be absurd if this essay had begun, like it did when it was still a draft, with “this is an essay by Scott Alexander on ACX.” But don’t mistake that for thinking that scaffolding is useless.
Without good prompt-jitsu, a piece of writing generated by GPT-3 or art generated by DALL·E will be bland and basic. Without good prompt engineering to set your own intentions, your writing risks being cliché and lifeless. Use the magic; it’s free, it’s real, and it works.
Definitely need more prompt engineering like this, since I can't write for diddly squat (learned that from Dr. Evil). But yes, this is in essence tacit learning of language, simplified. https://commoncog.com/the-tacit-knowledge-series/
About the "I'm writing a letter to a friend" mindset, I think the main ways it would affect my own writing if I were to take it seriously is actually *using fewer disclaimers*.
When I write for an unknown audience, there's a certain amount of walking on eggshells involved. I'm not *comfortable* with the unknown reader, they're not comfortable with me, we don't have a rich shared context - so necessarily there are some things I'll want to establish up front. I will pre-empt some potential hostile misreadings, or take the time to establish a clear context before diving into the heart of the matter, to make sure we're on the same page.
If I were writing for a specific friend... I'd jettison a lot of that, because our shared context simply makes it unnecessary. It might even feel blunter, more abrupt that way... because it's leaning on that known-solid friendship.
(I doubt that actually following that sort of mindset when really writing for an unknown audience would be a great idea.)