What is Prompt Engineering?
Prompt engineering is an emerging field in software development. It involves optimizing prompts to use Large Language Models (LLMs) efficiently across a wide variety of applications. Research papers describe many techniques for optimizing prompts. In this blog we will learn what prompt engineering is and cover techniques for optimizing your prompts for different use cases.

Jordan Wu
7 min read

What is Prompt Engineering?
Prompt engineering is a new discipline for developing and optimizing prompts to use Large Language Models (LLMs) efficiently. An LLM is a model that has been pre-trained to understand and generate natural language, and it returns a result when given a prompt. There are many techniques from research papers used to refine that result. With this knowledge, developers can use prompting techniques to solve a wide range of common and complex tasks. At a high level, you send a prompt to an LLM and it gives you a result. The result depends on many factors, and as the developer you make sure the result is optimized for your task.
Large Language Model (LLM) Settings
LLM settings are configurable parameters that control how a model responds to your prompt. The available settings depend on the model you are using and vary between models. We will go over some of the common ones:
Temperature: Takes a numeric value that determines whether the results should be more deterministic or more random. Lower values give you more consistent results, while higher values give you more diverse and creative results. Select a value based on your desired trade-off between coherence and creativity for your task.
Top P: Takes a numeric value and is an alternative to temperature that uses a different technique called nucleus sampling, where only the most likely tokens that together make up the top p probability mass are considered. Lower values give you more confident results, while higher values give you more diverse results.
The general recommendation is to alter temperature or top_p, but not both.
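To build intuition for how these two settings work, here is a toy sketch in Python. This is not any provider's actual sampling code, just an illustration: temperature rescales the model's raw next-token scores (logits) before they become probabilities, and top-p keeps only the smallest set of tokens whose probabilities add up to at least p.

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw logits to probabilities, scaled by temperature."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def top_p_filter(probs, p=0.9):
    """Keep the smallest set of token indices whose cumulative probability >= p."""
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, cumulative = [], 0.0
    for i in order:
        kept.append(i)
        cumulative += probs[i]
        if cumulative >= p:
            break
    return kept

logits = [2.0, 1.0, 0.5, 0.1]              # toy next-token scores
low_t = softmax(logits, temperature=0.2)   # sharp: near-deterministic
high_t = softmax(logits, temperature=2.0)  # flat: more random
```

With temperature 0.2 the top token gets almost all the probability mass, while at 2.0 the distribution flattens and lower-ranked tokens have a real chance of being sampled; a low top_p similarly restricts sampling to only the most likely tokens.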
Max Tokens: Takes a numeric value that determines the maximum number of tokens the model can generate. This is useful for controlling the cost of using an LLM, as usage is typically billed per million tokens. All LLMs use a tokenizer; here's OpenAI Tokenizer.
Frequency Penalty: Takes a numeric value that reduces repetition by applying a penalty to a token proportional to how many times that token has already appeared in the result and prompt. The higher the frequency penalty, the less likely a word is to appear again.
Presence Penalty: Takes a numeric value that also penalizes repeated tokens, but the penalty is the same for all repeated tokens regardless of how often they appear. This discourages the model from repeating phrases too often in its results.
The general recommendation is to alter the frequency_penalty or presence_penalty, but not both.
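As a rough sketch of the difference between the two penalties (modeled on how OpenAI documents them; other providers may differ), both lower a repeated token's score before sampling, but the frequency penalty scales with the repeat count while the presence penalty is a flat, one-time hit:

```python
def penalized_logit(logit, count, frequency_penalty=0.0, presence_penalty=0.0):
    """Lower a token's score based on how often it has already appeared.

    frequency_penalty scales with the repeat count; presence_penalty is a
    flat penalty applied once the token has appeared at all.
    """
    return (logit
            - count * frequency_penalty
            - (1 if count > 0 else 0) * presence_penalty)

# A token seen 3 times is penalized per occurrence under frequency_penalty...
freq_hit = penalized_logit(5.0, count=3, frequency_penalty=0.5)  # 5 - 3*0.5 = 3.5
# ...while presence_penalty hits it the same whether seen once or three times.
pres_hit = penalized_logit(5.0, count=3, presence_penalty=0.5)   # 5 - 0.5 = 4.5
```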
What is Prompting?
Prompting is the term used when a user includes instructions and context in a prompt that is sent to an LLM. The instructions contain the steps for performing a task, and the context is additional information the model needs to perform it. When designing a prompt, it's best to keep it simple, break complex tasks into simpler subtasks, be very specific about the desired outcome, and focus on telling the model what to do rather than what not to do.
Prompting Techniques
Prompting techniques are used to solve tasks by designing the prompt so it contains all the information the model needs to perform the task correctly. Many research papers cover these techniques, and knowing them will help you better understand prompt engineering.
Zero-shot Prompting
Zero-shot prompting is a technique that directly instructs the model to perform a task without providing any examples. It works well for simple tasks that don't require reasoning.
Summarize the text delimited by triple quotes into a single sentence.
"""
Prompt engineering is a new discipline for developing and optimizing prompts to
use Large Language Models (LLMs) efficiently. An LLM is a model that has been
pre-trained to understand and generate natural language, and it returns a
result when given a prompt. There are many techniques from research papers used
to refine that result. With this knowledge, developers can use prompting
techniques to solve a wide range of common and complex tasks. At a high level,
you send a prompt to an LLM and it gives you a result. The result depends on
many factors, and as the developer you make sure the result is optimized for
your task.
"""
Few-shot Prompting
Few-shot prompting is a technique that provides the LLM with examples of how to perform a task. This in-context learning leads to better performance and better results. It's used for more complex tasks that require reasoning.
You will be provided with text delimited by triple quotes.
If it contains a sequence of instructions,
re-write those instructions in the following format:
Step 1 - ...
Step 2 - ...
...
Step N - ...
If the text does not contain a sequence of instructions,
then simply write "No steps provided."
"""
Making a cup of tea is easy! First, you need to get some
water boiling. While that's happening
grab a cup and put a tea bag in it. Once the water is
hot enough, just pour it over the tea bag.
Let it sit for a bit so the tea can steep. After
a few minutes, take out the tea bag. If you
like, you can add some sugar or milk to taste.
And that's it! You've got yourself a delicious
cup of tea to enjoy.
"""
Chain-of-Thought Prompting (CoT)
Chain-of-Thought (CoT) prompting enables complex reasoning by having the model produce intermediate reasoning steps. To get better results, you can add "Let's think step by step" to the prompt.
I went to the market and bought 10 apples. I gave 2 apples to the neighbor and
2 to the repairman. I then went and bought 5 more apples and ate 1. How many
apples did I remain with?
Let's think step by step.
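As a sanity check on the answer the model should reach, the steps work out as:

```python
# Mirror the reasoning steps the model is asked to produce.
apples = 10        # bought 10 apples at the market
apples -= 2 + 2    # gave 2 to the neighbor and 2 to the repairman
apples += 5        # bought 5 more
apples -= 1        # ate 1
# apples is now 10
```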
Self-Consistency
Self-consistency is a prompt engineering technique in which you sample multiple chain-of-thought completions and select the most consistent answer among their results. This boosts performance on complex tasks involving arithmetic and commonsense reasoning.
Q: There are 15 trees in the grove. Grove workers will plant trees in the grove
today. After they are done, there will be 21 trees. How many trees did the
grove workers plant today?
A: We start with 15 trees. Later we have 21 trees. The difference must be the
number of trees they planted. So, they must have planted 21 - 15 = 6 trees. The
answer is 6.
Q: If there are 3 cars in the parking lot and 2 more cars arrive, how many cars
are in the parking lot?
A: There are 3 cars in the parking lot already. 2 more arrive. Now there are 3
+ 2 = 5 cars. The answer is 5.
Q: Leah had 32 chocolates and her sister had 42. If they ate 35, how many
pieces do they have left in total?
A: Leah had 32 chocolates and Leah’s sister had 42. That means there were
originally 32 + 42 = 74 chocolates. 35 have been eaten. So in total they still
have 74 - 35 = 39 chocolates. The answer is 39.
Q: Jason had 20 lollipops. He gave Denny some lollipops. Now Jason has 12
lollipops. How many lollipops did Jason give to Denny?
A: Jason had 20 lollipops. Since he only has 12 now, he must have given the
rest to Denny. The number of lollipops he has given to Denny must have been 20
- 12 = 8 lollipops. The answer is 8.
Q: Shawn has five toys. For Christmas, he got two toys each from his mom and
dad. How many toys does he have now?
A: He has 5 toys. He got 2 from mom, so after that he has 5 + 2 = 7 toys. Then
he got 2 more from dad, so in total he has 7 + 2 = 9 toys. The answer is 9.
Q: There were nine computers in the server room. Five more computers were
installed each day, from Monday to Thursday. How many computers are now in the
server room?
A: There are 4 days from Monday to Thursday. 5 computers were added each day.
That means in total 4 * 5 = 20 computers were added. There were 9 computers in
the beginning, so now there are 9 + 20 = 29 computers. The answer is 29.
Q: Michael had 58 golf balls. On Tuesday, he lost 23 golf balls. On Wednesday,
he lost 2 more. How many golf balls did he have at the end of Wednesday?
A: Michael initially had 58 balls. He lost 23 on Tuesday, so after that he has
58 - 23 = 35 balls. On Wednesday he lost 2 more so now he has 35 - 2 = 33
balls. The answer is 33.
Q: Olivia has $23. She bought five bagels for $3 each. How much money does she
have left?
A: She bought 5 bagels for $3 each. This means she spent $15. She has $8 left.
Q: When I was 6 my sister was half my age. Now I’m 70 how old is my sister?
A:
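Once you've sampled several chain-of-thought completions (at a temperature above zero, so the samples differ), selecting the most consistent answer can be as simple as a majority vote. A minimal sketch, with made-up sample answers for the age question above:

```python
from collections import Counter

def most_consistent(answers):
    """Pick the answer that the largest number of sampled
    chain-of-thought completions agree on (majority vote)."""
    return Counter(answers).most_common(1)[0][0]

# Final answers extracted from, say, five sampled completions.
# (When I was 6, my sister was 3, so she is 3 years younger: 70 - 3 = 67.)
samples = ["67", "67", "35", "67", "67"]
answer = most_consistent(samples)   # "67"
```

Even if one reasoning path goes wrong (here, "35", the common mistake of halving 70), the vote across samples recovers the correct answer.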
Generated Knowledge Prompting
LLMs continue to improve, and one popular technique is to incorporate knowledge or information into the prompt to help the model make more accurate predictions. This is helpful for commonsense reasoning; be sure to provide knowledge that is relevant to the question to get a more accurate result.
Question: Part of golf is trying to get a higher point total than others. Yes
or No?
Knowledge: The objective of golf is to play a set of holes in the least number
of strokes. A round of golf typically consists of 18 holes. Each hole is played
once in the round on a standard golf course. Each stroke is counted as one
point, and the total number of strokes is used to determine the winner of the
game.
Explain and Answer:
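A minimal helper (the name is illustrative) for assembling the knowledge-augmented prompt above; the knowledge could be curated by hand or generated by a prior LLM call:

```python
def knowledge_prompt(question: str, knowledge: str) -> str:
    """Prepend knowledge to the question so the model can
    ground its answer in it."""
    return (
        f"Question: {question}\n"
        f"Knowledge: {knowledge}\n"
        "Explain and Answer:"
    )

prompt = knowledge_prompt(
    "Part of golf is trying to get a higher point total than others. Yes or No?",
    "The objective of golf is to play a set of holes in the least number of strokes.",
)
```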
Prompt Chaining
To improve the reliability and performance of LLMs, one important prompt engineering technique is to break a task into its subtasks. In prompt chaining, a task is split into subtasks and the result of one prompt is fed into the next, creating a chain of prompt operations. Prompt chaining can be used in different scenarios that involve several operations or transformations.
You are a helpful assistant. Your task is to help answer a question given in a
document. The first step is to extract quotes relevant to the question from the
document, delimited by ####. Please output the list of quotes using
<quotes></quotes>. Respond with "No relevant quotes found!" if no relevant quotes were
found.
####
{{document}}
####
Here, the document could be the Prompt Engineering Wikipedia article. Sending this text to the LLM would produce the following result.
<quotes>
- Chain-of-thought (CoT) prompting[27]
- Generated knowledge prompting[37]
- Least-to-most prompting[38]
- Self-consistency decoding[39]
- Complexity-based prompting[41]
- Self-refine[42]
- Tree-of-thought prompting[43]
- Maieutic prompting[45]
- Directional-stimulus prompting[46]
- Textual inversion and embeddings[59]
- Using gradient descent to search for prompts[61][62][63][64]
- Prompt injection[65][66][67]
</quotes>
You can now use the result in another prompt to help answer questions about the document with the given quotes.
Given a set of relevant quotes (delimited by <quotes></quotes>) extracted from
a document and the original document (delimited by ####), please compose an
answer to the question. Ensure that the answer is accurate, has a friendly
tone, and sounds helpful.
####
{{document}}
####
<quotes>
- Chain-of-thought (CoT) prompting[27]
- Generated knowledge prompting[37]
- Least-to-most prompting[38]
- Self-consistency decoding[39]
- Complexity-based prompting[41]
- Self-refine[42]
- Tree-of-thought prompting[43]
- Maieutic prompting[45]
- Directional-stimulus prompting[46]
- Textual inversion and embeddings[59]
- Using gradient descent to search for prompts[61][62][63][64]
- Prompt injection[65][66][67]
</quotes>
Question:
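The two-step chain above can be sketched in Python, where call_llm stands in for whatever LLM API you actually use (the stub below just records the prompts it receives, so the structure of the chain is visible without a real API call):

```python
def extract_quotes(call_llm, document, question):
    """Step 1: ask the model for quotes relevant to the question."""
    prompt = (
        "Extract quotes relevant to the question from the document "
        f"delimited by ####. Output them inside <quotes></quotes>.\n"
        f"####\n{document}\n####\nQuestion: {question}"
    )
    return call_llm(prompt)

def answer_from_quotes(call_llm, document, quotes, question):
    """Step 2: feed the extracted quotes back in to compose the final answer."""
    prompt = (
        "Using the quotes (delimited by <quotes></quotes>) and the document "
        f"(delimited by ####), answer the question.\n"
        f"####\n{document}\n####\n{quotes}\nQuestion: {question}"
    )
    return call_llm(prompt)

def chain(call_llm, document, question):
    quotes = extract_quotes(call_llm, document, question)
    return answer_from_quotes(call_llm, document, quotes, question)

# Stand-in for a real LLM API call.
calls = []
def stub_llm(prompt):
    calls.append(prompt)
    return "<quotes>stub quotes</quotes>"

final = chain(stub_llm, "{{document}}", "What prompting techniques are listed?")
```

The key design point is that the output of step 1 is embedded verbatim in the prompt for step 2, which is what makes this a chain rather than two independent prompts.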
More Advanced Prompting Techniques
Prompt engineering is an evolving discipline and there are many advanced techniques to help drive a more accurate result. Check out Prompt Engineering Guide - Prompting Techniques for more information.
Summary
Prompt engineering is a new discipline in the world of LLMs. Many startups are harnessing the power of LLMs to solve real-world problems by improving processes, reducing costs, and creating AI tools that make users more productive. This blog covered what prompt engineering is and some of the techniques used when designing prompts. If you want to learn more tactics for getting better results from LLMs, check out OpenAI Prompt engineering.