Back to the basics: Prompt engineering strategies to get the most out of GPTs

As the AI hype train continues barreling down the track, sometimes it’s important to zoom out and highlight some GPT prompting strategies that, while not so new and shiny, have been battle-tested over the past year and reliably produce quality outputs.

So whether you’re a GPT n00b or a seasoned prompt engineer, you might find some of these strategies valuable. And feel free to share some tips and tricks in the comments! (Just kidding, there are no comments.)

Background: How do GPTs work?

Basically, ChatGPT and other GPTs (a kind of Large Language Model) work like Spicy Autocomplete. They’ve read most of the internet, so if they see a chunk of text, they can predict the most likely next piece of text.

Input to GPT3’s completion model:

1,2,3,4,

Example output:

5

See how it said “5”? That’s because it has seen 1,2,3,4, so many times on the internet that it can confidently say 5 is the most likely next piece of text (each piece is called a “token”).

Here’s another example input:

After class I'm going to read in the

Example output:

library.

Again, see how it works? Spicy autocomplete. Pretty straightforward.
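To make the autocomplete intuition concrete, here’s a toy sketch in Python. It’s nothing like a real GPT (real models learn statistics over billions of documents, not a lookup table), but the shape of the problem is the same: given what came before, pick the most likely next token.

```python
from collections import Counter, defaultdict

# Toy "training data" standing in for the internet.
corpus = "1 2 3 4 5 1 2 3 4 5 1 2 3 4 5".split()

# Count which token follows each token.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(token):
    """Return the token most often seen after `token`."""
    return following[token].most_common(1)[0][0]

print(predict_next("4"))  # -> 5, because "5" always follows "4" in the corpus
```

A real GPT does the same trick with a neural network instead of a frequency table, which is what lets it generalize to text it has never seen verbatim.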

More advanced examples

Here’s another, slightly more advanced example…

Input:

Your job is to write a silly poem about cheese.

Once there was a

Output:

little mouse
Who loved to eat cheese by the house
He would nibble and munch all day
On cheddar, gouda, and brie - hooray!

One day, he met a smelly old goat
Who lived down by the moat
She said, "Come try some limburger
It'll make your taste buds eager!"

So the mouse took a bite
And his eyes widened with delight
But soon he began to feel strange
His body began to change

See how that worked? By providing more context in my prompt, I exploited the autocomplete nature of GPTs: I explained what I wanted, then started the poem so the model could finish it.

💡 This is a powerful concept we’ll build upon below: A good prompt provides context so its outputs can have more relevance and meaning. (AKA “Garbage in, garbage out”)

Going from Spicy Autocomplete to Chat

With newer models like ChatGPT, OpenAI decided to make an important tweak to the inputs. Instead of autocomplete, they embraced a conversational approach. They now wrap your prompt in unseen instructions that set up a conversation. For example:

Your prompt:

What are some good design principles for a product design team?

What ChatGPT sees:

You are ChatGPT, a helpful assistant who gives concise and helpful answers. You also provide balanced opinions and promote open-mindedness. You are an AI, and cannot answer questions related to violence or other offensive topics.

User input: What are some good design principles for a product design team?
Your response:

See what’s happening here? ChatGPT adds some additional text around your prompt, turning autocomplete into a conversation. This is important, so take another moment to re-read it.

(Note: see that text that begins with “You are ChatGPT…”? I pulled that out of thin air as an example. The real system prompt for ChatGPT is a closely guarded trade secret, and is much more complex. But you get the idea.)
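Under the hood, that wrapping step is just string assembly. Here’s a rough Python sketch — the system-prompt text and the “User input:” / “Your response:” labels are my own illustration, not OpenAI’s real format:

```python
# Hypothetical system prompt, invented for illustration.
SYSTEM_PROMPT = (
    "You are ChatGPT, a helpful assistant who gives concise and helpful "
    "answers. You also provide balanced opinions and promote open-mindedness."
)

def build_chat_prompt(user_input):
    """Wrap the user's message in conversational scaffolding, so a plain
    autocomplete model ends up completing the assistant's turn."""
    return (
        f"{SYSTEM_PROMPT}\n\n"
        f"User input: {user_input}\n"
        f"Your response:"
    )

print(build_chat_prompt("What are some good design principles for a product design team?"))
```

Because the assembled text ends with “Your response:”, the model’s “autocomplete” of that text is, by construction, an answer to your question.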

Now that we know the basics of how ChatGPT works under the hood, we can apply some principles for writing good prompts. Let’s do this.

The secret to good prompts: Context and Examples

GPTs can be easily steered to give you outputs you want, as long as you know how to provide proper context and examples.

Bad prompt (because it has no context or examples):

List 5 funny exclamations you might say if you stubbed your toe.

Output (these are pretty lame, right?):

"Oh, sugar snaps!"
"Jumping jumping jacks!"
"Fiddlesticks!"
"Banana peels and lollipops!"
"Cheese and crackers!"

Better prompt (with context and examples):

List 5 funny exclamations you might say if you stubbed your toe. Each exclamation should sound like you're about to say a swear word but decide not to at the last minute because there are kids around.

For example:
Mother FUDGER!
Shhhhhucks!

Better output:

Son of a biscuit!
Fiddlesticks!
Cheese and rice!
What the frog!
Oh, sugar honey iced tea!

(I’m not sure what that last one is about, but see how the others are much better??)

This structure is exactly how you can use AI to write outputs in a specific tone of voice and format. For example, here’s how you can generate an SEO-optimized headline for a recipe on a website.

Given a headline, generate an SEO-Optimized version. The SEO-Optimized version should:
- have a two phrase meta title, separated by an em dash, incorporating primary and secondary keywords
- follow this formula: [keyword] Recipe - How to [Make/Cook/Bake/Etc] [keyword]
- sometimes start with a modifier word, such as "Best"
		
Examples:
Headline: Watermelon Gazpacho
SEO-Optimized: Watermelon Gazpacho Recipe - How To Make Watermelon Gazpacho
		
Headline: Shrimp & Sausage Gumbo
SEO-Optimized: Best Gumbo Recipe - How to Make Easy Gumbo

Headline: Chicken Pot Pie Casserole
SEO-Optimized: Chicken Pot Pie Casserole - How To Make Chicken Pot Pie Casserole

So now we’ve provided some context and some examples. If we want to use this prompt to generate an SEO-optimized version of a headline, all we have to do is add something special at the end:

Headline: [paste the non-SEO-optimized headline here]
SEO-Optimized: 

^This is important! Notice what’s happening here? We include the actual headline that we want optimized, and we exploit the autocomplete nature of GPTs to tee up the response, which will be the SEO-optimized headline!

So here’s a full prompt, which you can try yourself in your GPT-of-choice! (Just remember to add a headline at the end)

Given a headline, generate an SEO-Optimized version. The SEO-Optimized version should:
- have a two phrase meta title, separated by an em dash, incorporating primary and secondary keywords
- follow this formula: [keyword] Recipe - How to [Make/Cook/Bake/Etc] [keyword]
- sometimes start with a modifier word, such as "Best"
		
Examples:
Headline: Watermelon Gazpacho
SEO-Optimized: Watermelon Gazpacho Recipe - How To Make Watermelon Gazpacho
		
Headline: Shrimp & Sausage Gumbo
SEO-Optimized: Best Gumbo Recipe - How to Make Easy Gumbo

Headline: Chicken Pot Pie Casserole
SEO-Optimized: Chicken Pot Pie Casserole - How To Make Chicken Pot Pie Casserole

Headline: [paste the non-SEO-optimized headline here]
SEO-Optimized: 
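If you’re generating headlines in bulk, the few-shot structure above maps neatly onto a template. Here’s a Python sketch that assembles the same prompt programmatically (the function name and example list are mine; the prompt text is from above):

```python
INSTRUCTIONS = (
    "Given a headline, generate an SEO-Optimized version. "
    "The SEO-Optimized version should:\n"
    "- have a two phrase meta title, separated by an em dash, "
    "incorporating primary and secondary keywords\n"
    "- follow this formula: [keyword] Recipe - How to "
    "[Make/Cook/Bake/Etc] [keyword]\n"
    '- sometimes start with a modifier word, such as "Best"\n'
)

# Worked examples (the "shots" in few-shot prompting).
EXAMPLES = [
    ("Watermelon Gazpacho",
     "Watermelon Gazpacho Recipe - How To Make Watermelon Gazpacho"),
    ("Shrimp & Sausage Gumbo",
     "Best Gumbo Recipe - How to Make Easy Gumbo"),
]

def build_seo_prompt(headline):
    """Assemble instructions + examples, then tee up the completion by
    ending with an unanswered 'SEO-Optimized:' line."""
    shots = "\n\n".join(
        f"Headline: {h}\nSEO-Optimized: {s}" for h, s in EXAMPLES
    )
    return (
        f"{INSTRUCTIONS}\nExamples:\n{shots}\n\n"
        f"Headline: {headline}\nSEO-Optimized:"
    )

print(build_seo_prompt("Chicken Pot Pie Casserole"))
```

The output of this function is the full prompt; you’d send it to your GPT-of-choice and the completion it returns is your optimized headline.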

Two good prompt templates

1: This is good for using GPTs as a thought partner:

[context]

[your question]

For example, sometimes it’s nice to keep the context separate so you can easily re-use it in other prompts:

I work for a large media company that publishes magazines and websites for many well-known brands.

What are some ways we can stay relevant given the rise of generative AI?

Another example, where the context and question are combined:

I'm trying to write some javascript that I can paste into the console which will add a button to a website which will, when pressed, copy the contents of the "article-body" div to my clipboard.
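Because the context is just a string, keeping it separate (as in the media-company example) makes it trivial to reuse across many questions. A small Python sketch — the helper and variable names are mine:

```python
# Reusable background context, written once.
COMPANY_CONTEXT = (
    "I work for a large media company that publishes magazines and "
    "websites for many well-known brands."
)

def with_context(question, context=COMPANY_CONTEXT):
    """Prepend reusable background context to any question."""
    return f"{context}\n\n{question}"

print(with_context(
    "What are some ways we can stay relevant given the rise of generative AI?"
))
print(with_context(
    "How should we think about licensing our content archive?"
))
```

Each call produces a complete, self-contained prompt, so you never have to retype the background.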

2: This is good for asking questions about a document:

[Document 1 title]
'''
[Document 1 text]
'''

[Your question]

For example:

Product Discovery Principles
'''
Product discovery is the process of closely understanding what your users’ problems and needs are, then validating your ideas for solutions before starting development. By forming a close relationship with your users and letting them guide your design thinking, your overall [product strategy](https://maze.co/collections/product-development/strategy) is much more likely to end up solving real-user problems.

Over time, the development of customer-centric ideas—like the [Jobs to be done](https://hbr.org/2016/09/know-your-customers-jobs-to-be-done) framework, and early iterative testing based on user feedback—encouraged product teams to put themselves in their users’ shoes more, and to ask what they thought more often.

These days, we refer to this broad template of activities and approaches as ‘product discovery.’

Alongside discovery as an initial exploratory stage, [*continuous* product discovery](https://maze.co/guides/product-discovery/continuous/) is a tool, mindset, and way of working that embraces the drive for customer feedback and makes it habitual. Product discovery learnings aren’t just gathered at the beginning of a project—they’re collected through continuous communication with customers throughout the entire development lifecycle.

Product discovery is a pivotal element of product design and development. Product discovery allows designers and researchers to form an intimate understanding of their target user, and truly uncover their main problems and—in turn—the best potential solutions.

By forming a close relationship with your users and letting them guide your design thinking, your overall [product strategy](https://maze.co/collections/product-development/strategy) is much more likely to end up solving real-user problems and find product market fit.
'''

Based on these Product Discovery Principles, when should I be collecting feedback on a project? Please reference specific examples from the Product Discovery Principles in your response.

^ Note how the document is wrapped in triple quotes? That’s just a way to separate that text from the rest of your prompt, so the GPT knows which part is the document and which part is your question. This is an important concept when building longer or more complicated prompts.
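The delimiter trick is also easy to script if you’re asking questions about many documents. A Python sketch (the function name is my own) that fences each document in triple quotes before appending the question:

```python
def build_doc_prompt(docs, question):
    """Build a question-answering prompt where each (title, text) document
    is fenced in triple quotes, so the model can tell the source material
    apart from the question."""
    sections = [f"{title}\n'''\n{text}\n'''" for title, text in docs]
    return "\n\n".join(sections + [question])

prompt = build_doc_prompt(
    [("Product Discovery Principles",
      "Product discovery is the process of closely understanding what "
      "your users' problems and needs are...")],
    "Based on these Product Discovery Principles, when should I be "
    "collecting feedback on a project?",
)
print(prompt)
```

The same function scales to several documents at once; each one gets its own title and triple-quoted fence, and the question always comes last so the model completes an answer to it.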

Wrapping up

So I hope you found this useful. Again, the two most important parts of a good prompt are providing context and examples. Have fun out there!