The Complete Guide to 10+ Different Types of Prompts for Language Models

Rupak (Bob) Roy - II
13 min read · Aug 14, 2024


End-to-End Hands-On Guide with Examples: Mastering Few-Shot, CoT, ReAct, and 10+ Prompting Techniques. We will also compare Zero-Shot Learning vs. zero-shot-react-description.

Hi everyone, today we will look into and understand the various ways to do prompt engineering, from zero-shot to prompt chaining.

List of commonly used prompting techniques:

#1. Zero-Shot Learning
#2. One-Shot Learning
#3. Few-Shot Learning
#4. Chain-of-Thought
#5. ReAct
#6. Prompt Chaining
#7. Negative Prompting
#8. Hybrid Prompting
#9. Iterative Prompting
#10. Conditional Prompting
#11. Role-Based Prompting, etc.


Khamrenga beel, Chandrapur, Assam

So let’s get started with our first one, “Zero-Shot Learning”.

We will initialize the LLM using Hugging Face model API calls, which provide better token limits than OpenAI.

First, log in to Hugging Face and generate an API key (Access Token).

#######################################################
#Set up the LLM Environment
#######################################################
from langchain_community.llms.huggingface_endpoint import HuggingFaceEndpoint
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain
from langchain.text_splitter import CharacterTextSplitter
from langchain.chains.mapreduce import MapReduceChain

##################################################
#Model API call
repo_id = "mistralai/Mistral-7B-Instruct-v0.2"
llm = HuggingFaceEndpoint(
    repo_id=repo_id,
    max_length=128,
    temperature=0.5,
    huggingfacehub_api_token="hf_yourkey")

1. Zero-Shot Prompting

It is a straightforward approach where the prompt/user input used to interact with the model doesn't contain any examples or templates to follow, or instructions on how to format or extract the output.


from langchain_core.output_parsers import StrOutputParser
parser = StrOutputParser() #string output formatting

from langchain.prompts import ChatPromptTemplate

first_prompt = ChatPromptTemplate.from_template(
    "Answer the following as best you can: {query}")

chain_one = LLMChain(llm=llm, prompt=first_prompt,
                     output_parser=parser)
t = chain_one.invoke("what are the advantages of using ChatPromptTemplate in llm")
#output is saved as dictionary

#New approach using LangChain Expression Language (LCEL)
chain_two = first_prompt | llm | parser
t2 = chain_two.invoke("what are the advantages of using ChatPromptTemplate in llm")

2. One-Shot Prompting

Here the prompt/user query used to interact with the model will have a single example, template, or instruction to follow.


################################################
# 2.One Shot Learning/Prompting ################
################################################

one_shot = ChatPromptTemplate.from_template(
    """
Question: {query}
Answer: Write the answers in 100 words in bullets.
""")

chain_one = LLMChain(llm=llm, prompt=one_shot,
                     output_parser=parser)
t = chain_one.invoke("what are the advantages of using One Shot Learning in llm")

3. Few-Shot Prompting

Here the prompt/user query used to interact with the model will have a few examples, templates, or instructions to follow.


from langchain.prompts import PromptTemplate
from langchain import FewShotPromptTemplate

#llm is already initialized above
#----------------------------------------------

example_template = """
Question: {query}
Response: {answer}
"""

example_prompt = PromptTemplate(
    input_variables=["query", "answer"],
    template=example_template)

#The previous original prompt can be divided into a prefix and suffix.
#The prefix consists of the instructions or context given to the model,
#while the suffix includes the user input and output indicator.

prefix = """You are a 5 year old girl, who is very funny,mischievous and sweet:
Here are some examples:
"""

suffix = """
Question: {userInput}
Response: """


examples = [
    {
        "query": "What is a mobile?",
        "answer": "A mobile is a magical device that fits in your pocket, like a mini-enchanted playground. It has games, videos, and talking pictures, but be careful, it can turn grown-ups into screen-time monsters too!"
    },
    {
        "query": "What are your dreams?",
        "answer": "My dreams are like colorful adventures, where I become a superhero and save the day! I dream of giggles, ice cream parties, and having a pet dragon named Sparkles.."
    }
]

few_shot_prompt_template = FewShotPromptTemplate(
    examples=examples,
    example_prompt=example_prompt,
    prefix=prefix,
    suffix=suffix,
    input_variables=["userInput"],
    example_separator="\n\n"
)

query = "what is a house?"

print(few_shot_prompt_template.format(userInput=query))

print(llm.invoke(few_shot_prompt_template.format(userInput=query)))

#Here we will observe the LLM responding as a 5-year-old girl, as set in the prefix
query = "who is the president of india?"
print(llm.invoke(few_shot_prompt_template.format(userInput=query)))
#output
#Question: who is the president of india?
#Response:
#Oh, that's a grown-up question! But if I have to guess, I'd say it's the friendly man in the big house with a beautiful garden, who makes sure everyone in India is happy and safe.

We can observe one thing for sure: it is answering as a 5-year-old girl who is very funny and mischievous.

Now let’s try out LengthBasedExampleSelector, a tool used to efficiently select examples that fit within the length constraints of a language model’s context window, making it a valuable component for prompt engineering and context management.


######################################################
#Few Shot Prompt with LengthBasedExampleSelector #####
######################################################

#the LengthBasedExampleSelector is a tool used to efficiently select examples that fit within
#the length constraints of a language model’s context window, making it a valuable component for
#prompt engineering and context management.

examples = [
    {
        "query": "What is a mobile?",
        "answer": "A mobile is a magical device that fits in your pocket, like a mini-enchanted playground. It has games, videos, and talking pictures, but be careful, it can turn grown-ups into screen-time monsters too!"
    },
    {
        "query": "What are your dreams?",
        "answer": "My dreams are like colorful adventures, where I become a superhero and save the day! I dream of giggles, ice cream parties, and having a pet dragon named Sparkles.."
    },
    {
        "query": "What are your ambitions?",
        "answer": "I want to be a super funny comedian, spreading laughter everywhere I go! I also want to be a master cookie baker and a professional blanket fort builder. Being mischievous and sweet is just my bonus superpower!"
    },
    {
        "query": "What happens when you get sick?",
        "answer": "When I get sick, it's like a sneaky monster visits. I feel tired, sniffly, and need lots of cuddles. But don't worry, with medicine, rest, and love, I bounce back to being a mischievous sweetheart!"
    },
    {
        "query": "How much do you love your dad?",
        "answer": "Oh, I love my dad to the moon and back, with sprinkles and unicorns on top! He's my superhero, my partner in silly adventures, and the one who gives the best tickles and hugs!"
    },
    {
        "query": "Tell me about your friend?",
        "answer": "My friend is like a sunshine rainbow! We laugh, play, and have magical parties together. They always listen, share their toys, and make me feel special. Friendship is the best adventure!"
    },
    {
        "query": "What does math mean to you?",
        "answer": "Math is like a puzzle game, full of numbers and shapes. It helps me count my toys, build towers, and share treats equally. It's fun and makes my brain sparkle!"
    },
    {
        "query": "What is your fear?",
        "answer": "Sometimes I'm scared of thunderstorms and monsters under my bed. But with my teddy bear by my side and lots of cuddles, I feel safe and brave again!"
    }
]

from langchain.prompts.example_selector import LengthBasedExampleSelector

example_selector = LengthBasedExampleSelector(
    examples=examples,
    example_prompt=example_prompt,
    max_length=200
)


new_prompt_template = FewShotPromptTemplate(
    example_selector=example_selector, # use example_selector instead of examples
    example_prompt=example_prompt,
    prefix=prefix,
    suffix=suffix,
    input_variables=["userInput"],
    example_separator="\n"
)

query = "What is a house?"
print(new_prompt_template.format(userInput=query))

print(llm.invoke(new_prompt_template.format(userInput=query)))

query = "What is a LengthBasedExampleSelector in llm?"
print(llm.invoke(new_prompt_template.format(userInput=query)))
#output
#A LengthBasedExampleSelector in llm is a magical tool that helps grown-ups find just the right example for their learning. It's like a super picky librarian, only instead of books, it selects examples based on their length. But don't ask me why grown-ups need such a fancy tool, I'm just a 5-year-old, I'm still learning the alphabet!

An explanation of LengthBasedExampleSelector from a 5-year-old girl who is very funny and mischievous.


There are other selectors that we can try, namely SemanticSimilarityExampleSelector and MaxMarginalRelevanceExampleSelector.
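As a minimal sketch of the first one (assuming the sentence-transformers and chromadb packages are installed for the embeddings and vector store), SemanticSimilarityExampleSelector picks the examples closest in meaning to the incoming query rather than selecting by length:

######################################################
#Few Shot Prompt with SemanticSimilarityExampleSelector (a minimal sketch)
######################################################
from langchain.prompts.example_selector import SemanticSimilarityExampleSelector
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_community.vectorstores import Chroma

semantic_selector = SemanticSimilarityExampleSelector.from_examples(
    examples,                 #the same example list as above
    HuggingFaceEmbeddings(),  #embeds each example for similarity search
    Chroma,                   #vector store class used to index the examples
    k=2                       #return the 2 most semantically similar examples
)

semantic_prompt_template = FewShotPromptTemplate(
    example_selector=semantic_selector,
    example_prompt=example_prompt,
    prefix=prefix,
    suffix=suffix,
    input_variables=["userInput"],
    example_separator="\n"
)
print(semantic_prompt_template.format(userInput="What is a house?"))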

4. Chain-of-Thought (CoT)

It is a technique that improves the performance of language models by explicitly prompting the model to generate a step-by-step explanation or reasoning process before arriving at a final answer.


cot = ChatPromptTemplate.from_template(
    """
Question: {query}
Answer: The task requires the following actions:
1- Introduce the location of the subject
2- Give some history in brief
3- Mention what it is famous for

And also write the answers in 100 words in bullets.
""")

chain_cot = LLMChain(llm=llm, prompt=cot)

t = chain_cot.invoke("What is Ellora")

It has generated a step-by-step explanation from the template.
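A popular lightweight variant worth trying is zero-shot CoT, where simply appending "Let's think step by step" nudges the model into laying out its reasoning before answering. A minimal sketch (not from the original code), reusing the llm and LLMChain from above:

#Zero-shot CoT: no decomposition steps, just a reasoning nudge
zero_shot_cot = ChatPromptTemplate.from_template(
    """
Question: {query}
Answer: Let's think step by step.
""")

chain_zs_cot = LLMChain(llm=llm, prompt=zero_shot_cot)
t = chain_zs_cot.invoke("If a train travels 60 km in 45 minutes, what is its average speed in km/h?")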

5. ReAct Prompting (Reasoning + Acting)

It replicates how a person thinks, using the ‘reasoning’ and ‘acting’ capabilities of our brain.


from langchain.agents import load_tools
from langchain.agents import Tool, tool
from langchain.agents import AgentExecutor, create_react_agent
from langchain.prompts import PromptTemplate

#load an existing tool via load_tools
tools = load_tools(["llm-math"], llm=llm)

#create a new tool
@tool
def get_word_length(word: str) -> int:
    """Returns the length of a word."""
    return len(word)

get_word_length.invoke("abc")
#output = 3

tools_new = [get_word_length]
tools_new.append(tools[0]) #adding tool 1 from above

tools_new[0].name, tools_new[0].description
#('get_word_length', 'Returns the length of a word.')
tools_new[1].name, tools_new[1].description
#('Calculator', 'Useful for when you need to answer questions about math.')

#So we have two tools get_word_length & calculator
react_prompt = PromptTemplate.from_template(
    """Answer the following questions as best you can. You have access to the following tools:

{tools}
{chat_history}
Use the following format:

Question: the input question you must answer
Thought: you should always think about what to do
Action: the action to take, should be one of [{tool_names}]
Action Input: the input to the action
Observation: the result of the action
... (this Thought/Action/Action Input/Observation can repeat N times)
Thought: I now know the final answer
Final Answer: the final answer to the original input question

Begin!

Question: {input}
Thought:{agent_scratchpad}
""")

#Adding memory -----------------------------
from langchain.memory import ConversationBufferWindowMemory
memory2 = ConversationBufferWindowMemory(memory_key="chat_history",
                                         k=3, return_messages=True)

#Reinitializing the llm without max_length, as AgentExecutor can't handle max_length
repo_id = "mistralai/Mistral-7B-Instruct-v0.2"
llm = HuggingFaceEndpoint(
    repo_id=repo_id,
    #max_length=128,
    temperature=0.5,
    huggingfacehub_api_token="hf_yourkey")


agent = create_react_agent(llm, tools_new, react_prompt)
#create_react_agent is an additional function that uses ReAct prompting; don't confuse it with react_prompt
#Based on the paper "ReAct: Synergizing Reasoning and Acting in Language Models" (https://arxiv.org/abs/2210.03629)

agent_executor = AgentExecutor(
    agent=agent, #the agent from create_react_agent() instead of AgentType.CONVERSATIONAL_REACT_DESCRIPTION
    tools=tools_new, #use the same tool list the agent was created with
    verbose=True,
    max_iterations=5,
    memory=memory2,
    #max_execution_time=max_execution_time,
    #early_stopping_method=early_stopping_method,
    handle_parsing_errors=True)

results3 = agent_executor.invoke({"input": "give me word count of september"})

We can clearly observe that it tries to find an appropriate tool (Action) to answer the query, then takes the input (Action Input), and then records an Observation, as laid out in the ReAct prompt.

Zero-Shot Learning vs. zero-shot-react-description

We often get overwhelmed by these two similar-sounding terms, so let's understand the differences.

Zero-Shot React Description and Zero-Shot Prompting are techniques in the context of using language models like GPT-3/4. Both involve leveraging models for generating responses or performing tasks without extensive pre-training or specific examples. However, they differ in their approach and application.

Zero-shot prompting refers to the ability to get a model to perform a task or answer a question without needing any specific examples or additional training on that particular task. Instead, you provide a direct prompt or query to the model, which uses its general knowledge to generate a response.

How It Works:

Direct Prompt: You give the model a prompt that clearly defines the task or question. For instance, if you want the model to summarize a text, you simply ask it to “Summarize the following text.”
General Knowledge: The model relies on its pre-existing knowledge and training to understand and respond to the prompt.

Example:

Prompt: “Explain the concept of blockchain technology.”
Model Response: The model uses its understanding of blockchain technology based on its training data to generate an informative response.

Applications:

Q&A: Answering questions without specific examples.
Summarization: Summarizing texts or documents.
Translation: Translating text from one language to another.

Advantages:

Flexibility: Can be applied to a wide range of tasks without needing task-specific training.
Convenience: No need for task-specific data or examples.

Limitations:

Accuracy: Responses depend on the model’s general knowledge and might not be as precise as when fine-tuned with specific examples.
Context Understanding: The model might misinterpret prompts if they are not clearly stated.

Zero-Shot React Description is a more specific technique used within agent frameworks such as LangChain, based on the ReAct (Reasoning + Acting) approach. It involves the model reacting to a given scenario or task based on natural-language descriptions alone, without any explicit examples of how to perform that task.

How It Works:

Scenario-Based: The model is given a description or scenario and is expected to react or generate a description based on its understanding.
Contextual Generation: Unlike simple zero-shot prompting, this might involve more complex interactions where the model’s response is generated based on its ability to understand and react to the provided scenario.

Example:

Scenario Description: “You are designing a new app feature that helps users track their fitness goals. Describe how you would approach this design in a way that encourages user engagement.”
Model Response: The model generates a detailed description of the approach to designing the feature, considering user engagement strategies.

Applications:
Creative Writing: Generating descriptions or narratives based on a given context.
Scenario Planning: Outlining strategies or reactions based on hypothetical scenarios.
Interactive Systems: Creating responses or actions in interactive applications where context is provided.

Advantages:

Context Awareness: Often tailored to more specific scenarios, which can make responses more relevant and detailed.
Creative and Complex Tasks: Useful for tasks that involve creativity or complex reasoning based on provided contexts.

Limitations:

Complexity: Might require more nuanced understanding and description, which can be challenging for models if the scenario is highly specific or complex.
Dependence on Scenario: Effectiveness depends on how well the scenario is described and understood by the model.

Comparison

Flexibility vs. Specificity: Zero-Shot Prompting is more flexible and can be used for a broad range of tasks with simple prompts, while Zero-Shot React Description is more specific and suited for detailed, contextual responses based on provided scenarios.
Application Scope: Zero-Shot Prompting is often used for straightforward tasks like answering questions or summarizing, whereas Zero-Shot React Description is typically applied in more complex or creative scenarios where context is key.

In summary, while both methods leverage a model’s pre-existing knowledge without needing specific training data, Zero-Shot Prompting is more general and versatile, while Zero-Shot React Description focuses on generating detailed responses based on specific scenarios or contexts.
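To see the difference concretely in LangChain terms, here is a minimal sketch: plain zero-shot prompting is a single direct llm call, while zero-shot-react-description builds an agent that chooses tools from their descriptions alone. It reuses the llm and tools_new defined earlier; note that initialize_agent is deprecated in newer LangChain versions in favor of create_react_agent, which we used above.

#Plain zero-shot prompting: one direct call, no tools involved
print(llm.invoke("Explain the concept of blockchain technology."))

#zero-shot-react-description: an agent that picks a tool from its
#description alone, with no examples, following the ReAct loop
from langchain.agents import initialize_agent, AgentType

zs_agent = initialize_agent(
    tools_new,  #the tools defined in the ReAct section above
    llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,
    handle_parsing_errors=True)

zs_agent.invoke({"input": "give me the word count of september"})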

6. Prompt Chaining

Prompt chaining refers to a technique where multiple prompts are used sequentially to guide the model’s output through a series of steps or stages. This method allows for the decomposition of complex tasks into simpler, manageable sub-tasks, with each step building on the results of the previous one. Here’s how it works and why it’s useful:

How Prompt Chaining Works

  • Sequential Prompts: The process begins with an initial prompt that generates a specific output. This output is then fed into a subsequent prompt, which further processes or refines the result. This chaining continues until the desired final output is achieved.
  • Decomposing Tasks: For example, if the task is to write a detailed article, the first prompt might generate an outline, the second prompt expands each section of the outline, and a third prompt could revise the content for clarity and coherence.

Advantages of Prompt Chaining

  • Improved Accuracy: By breaking down the task, prompt chaining can lead to more accurate and detailed outputs, as the model can focus on one sub-task at a time.
  • Error Reduction: It reduces the likelihood of errors or misinterpretations that can occur when attempting to generate complex outputs in a single step.
from langchain.prompts import ChatPromptTemplate
from langchain.chains import SimpleSequentialChain
from langchain_core.output_parsers import StrOutputParser

output_parser = StrOutputParser()

first_prompt = ChatPromptTemplate.from_template(
    "what are token limits of {product}?")
chain_one = LLMChain(llm=llm, prompt=first_prompt, output_parser=output_parser)

second_prompt = ChatPromptTemplate.from_template(
    "Write a 20 words description for the following company:{company_name}")
chain_two = LLMChain(llm=llm, prompt=second_prompt, output_parser=output_parser)

#chain_one's output is passed to chain_two automatically
simple_chain = SimpleSequentialChain(chains=[chain_one, chain_two], verbose=True)

t = simple_chain.invoke("mistral.ai")
print(t) #output is saved as a dictionary

Check out my previous articles for more examples of complex prompt chaining.

There are several other advanced prompting techniques that can enhance how language models are utilized for various tasks. Here’s a generalized overview of some of them: Negative Prompting, Iterative Prompting, Conditional Prompting, and Role-Based Prompting.

Negative Prompting

Negative Prompting involves directing a language model to avoid certain types of content or directions in its response. It helps refine the output by specifying what should not be included.

How It Works:

Explicit Exclusions: Clearly state what should be avoided to control the model’s response.
Controlled Responses: Ensures the model stays on track and avoids unwanted or irrelevant content.

Example:

Prompt: “Explain renewable energy, avoiding technical jargon or complex terms.”
Response: The model delivers a simplified explanation of renewable energy, free from technical details.
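A minimal sketch of this, reusing the llm and parser initialized above (the exclusion list in the template is just illustrative):

#Negative prompting: explicitly state what to avoid
negative_prompt = ChatPromptTemplate.from_template(
    """
Question: {query}
Answer: Explain in simple words. Do NOT use technical jargon,
acronyms, or statistics in the answer.
""")

chain_negative = negative_prompt | llm | parser
t = chain_negative.invoke({"query": "Explain renewable energy"})
print(t)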

Iterative Prompting

Iterative Prompting involves improving a model’s response through a series of prompts, gradually refining the output by adding more context or feedback.

How It Works:

Refinement Process: Begin with an initial prompt and use the model’s responses to guide follow-up prompts for more detail or accuracy.
Feedback Loop: Continuously adjust prompts based on previous responses to enhance the final output.

Example:

Initial Prompt: “Describe the key features of electric vehicles.”
Follow-Up Prompt: “Expand on the environmental benefits of electric vehicles.”
Final Output: The model provides a detailed description of the environmental benefits, building on the initial response.
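A minimal sketch of the feedback loop, feeding the first response back into a refinement prompt (reusing the llm and parser from above; the LCEL piping style was introduced in the zero-shot section):

#Iterative prompting: the follow-up prompt carries the first response forward
initial_prompt = ChatPromptTemplate.from_template(
    "Describe the key features of {topic}.")
draft = (initial_prompt | llm | parser).invoke({"topic": "electric vehicles"})

#feed the first response back in and ask for a targeted refinement
followup_prompt = ChatPromptTemplate.from_template(
    """Here is a draft answer:
{draft}

Expand on the environmental benefits mentioned above.""")
refined = (followup_prompt | llm | parser).invoke({"draft": draft})
print(refined)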

Conditional Prompting

Conditional Prompting involves crafting prompts that include specific conditions or constraints for generating responses. This technique ensures that the model’s output adheres to particular requirements.

How It Works:

Set Conditions: Specify conditions or constraints within the prompt to guide the model’s response.
Targeted Responses: Achieve responses that meet the defined criteria.

Example:

Prompt: “Describe the benefits of exercise for mental health, focusing specifically on stress reduction and cognitive function.”
Response: The model generates a response that highlights the benefits related to stress reduction and cognitive function.
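A minimal sketch, with the conditions passed in as template variables so the same prompt can be reused with different constraints (variable names here are just illustrative):

#Conditional prompting: the constraints are part of the template
conditional_prompt = ChatPromptTemplate.from_template(
    """
Question: Describe the benefits of {topic} for mental health.
Condition: focus specifically on {condition_1} and {condition_2}.
""")

chain_conditional = conditional_prompt | llm | parser
t = chain_conditional.invoke({"topic": "exercise",
                              "condition_1": "stress reduction",
                              "condition_2": "cognitive function"})
print(t)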

Role-Based Prompting

Role-Based Prompting involves framing the prompt as if the model were assuming a specific role or perspective. This technique helps tailor the response based on the assumed role.

How It Works:

Assign a Role: Specify a role or perspective for the model to adopt in its response.
Contextual Output: Generate responses that align with the assigned role or perspective.

Example:

Prompt: “As a financial advisor, explain the importance of budgeting to a young professional.”
Response: The model provides advice on budgeting from the perspective of a financial advisor.
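A minimal sketch, with the role set as a template variable, much like the 5-year-old-girl prefix in the few-shot section:

#Role-based prompting: the assigned role frames the whole response
role_prompt = ChatPromptTemplate.from_template(
    """
You are a {role}.
{query}
""")

chain_role = role_prompt | llm | parser
t = chain_role.invoke({"role": "financial advisor",
                       "query": "Explain the importance of budgeting to a young professional."})
print(t)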

That’s it! There are a lot of prompting styles we can try out to suit our requirements.

Try them out with the Cheat Sheet.

Once again, thanks for your time; I hope you enjoyed this. I tried my best to gather the details and simplify them as much as I could.

In the next article, we will explore LLM CALLBACKS with LangChain, which allow you to hook into the various stages of your LLM application.

Until then, feel free to reach out. Thanks for your time; if you enjoyed this short article, there are tons of topics in advanced analytics, data science, and machine learning available in my Medium repo: https://medium.com/@bobrupakroy

Some of my alternative internet presences are Facebook, Instagram, Udemy, Blogger, Issuu, Slideshare, Scribd, and more.

Also available on Quora @ https://www.quora.com/profile/Rupak-Bob-Roy

Let me know if you need anything. Talk Soon.

Check out the links; I hope they help.

udemy: https://www.udemy.com/user/rupak-roy-2/
Water Hyacinth, Khamrenga beel, Chandrapur, Assam
