Cleaning
The first step is to break down the given prompt into a sequence of simpler prompts that are easier to process.
```python
from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate


def cleaned(llm, query, history):
    template = """
You will be given a prompt to be passed to an LLM. Format it in the following manner:
---
##Instructions
1. Correct any spelling and grammatical (punctuation, etc.) errors.
2. If the prompt is complex, break it into simpler parts, such that each sub-task can be processed in one go by an SQL query or mathematical computation, and rewrite them as a numbered list.
3. You are also given the history of the chat session, i.e., previous messages between the human (user) and system (model). Use this as context for cleaning the current prompt.
<more instructions>
---
---
<Examples>
---
---
##History
{history}
---
---
##Prompt
{prompt}
---
"""
    prompt = PromptTemplate(input_variables=["prompt", "history"], template=template)
    return LLMChain(llm=llm, prompt=prompt).run(prompt=query, history=history)
```
The chat history up to this point is passed in to provide context for cleaning the prompt.
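For concreteness, a hypothetical history might look like the following; the article does not pin down the exact format of each turn, only that prior messages are joined with newlines before being passed in:

```python
# Hypothetical chat history: prior turns between the user and the model.
# The per-turn formatting here is an assumption made for illustration.
history = [
    "Human: What were our total sales in 2022?",
    "System: Total sales in 2022 were $1.2M.",
]
```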
The LLM in this case runs at temperature 0, since it is expected to return only the cleaned series of instructions and nothing else.
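A minimal sketch of such a model, assuming an OpenAI chat model behind LangChain's legacy wrapper (the article does not name the underlying model):

```python
from langchain.chat_models import ChatOpenAI

# Temperature 0 keeps the output (near-)deterministic, so the model emits
# only the cleaned, numbered instruction list.
llm = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0)
```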
This series of instructions is then broken down into individual prompts, which are passed to the next stage.
```python
tasks = cleaned(llm, query, "\n".join(history)).strip().split("\n")
# Keep only lines of the form "1. <task>" and strip the list numbering;
# maxsplit=1 preserves any ". " occurring inside the task text itself.
tasks = [task.split(". ", 1)[1] for task in tasks if len(task.split(". ", 1)) > 1]
```
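For illustration, here is how a hypothetical cleaned response (not taken from the article) would be parsed into individual task strings:

```python
# Hypothetical cleaned output from the LLM.
cleaned_output = """1. Retrieve the total sales for 2023 from the orders table.
2. Compute the month-over-month growth rate from those totals."""

tasks = cleaned_output.strip().split("\n")
tasks = [task.split(". ", 1)[1] for task in tasks if len(task.split(". ", 1)) > 1]
print(tasks)
# ['Retrieve the total sales for 2023 from the orders table.',
#  'Compute the month-over-month growth rate from those totals.']
```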