Prompt Versioning
Prompt versioning is the process of curating and developing different prompt templates in a managed, versionable way, and it enables you to better test, compare, and evaluate the performance of different prompts.
Without prompt versioning, you would keep prompts in code, GitHub repos, or CSV files, where tracking the changes and performance of each prompt quickly becomes unmanageable as the number of prompts grows.
How Does Prompt Versioning Work?
As a quick overview, here's the typical prompt versioning workflow you'll adopt when using Confident AI:
- Create a prompt on Confident AI in Hyperparameters > Prompts.
- Pull your prompt template from Confident AI, much like pulling a GitHub repo.
- Save the prompt in memory, and interpolate it dynamically with variables each time you need to feed it to your LLM application.
Let's assume we've already created a prompt on Confident AI with the alias "My First Prompt" and created a first version for it:
"Hi! The name's {{ first_name }}. {{ first_name }} {{ last_name }}."
To use the latest (and first) prompt version of "My First Prompt", simply pull the prompt template by supplying the alias:
from deepeval.prompt import Prompt
prompt = Prompt(alias="My First Prompt")
prompt.pull()
Finally, when it comes time to use your prompt template, simply interpolate it with the variables you've defined in double curly brackets ({{ variable_name }}):
...
prompt_to_my_llm = prompt.interpolate(first_name="Joe", last_name="Biden")
print(prompt_to_my_llm)
This gives the final prompt, ready to be passed to your LLM:
"Hi! The name's Joe. Joe Biden."
Let's walk through each step in detail in the following sections.
Create Your First Prompt
On Confident AI, a prompt is different from a prompt version. While a prompt version contains the actual text used in your prompt template, a prompt is merely a namespace to group together different versions. Your project can have multiple prompts, and each prompt can have multiple versions.
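For illustration, assuming you've created a second version for the same prompt (the version "00.00.02" below is hypothetical), you could pull each version into its own Prompt instance:

from deepeval.prompt import Prompt

# One prompt (the alias), with multiple versions grouped under it
prompt_v1 = Prompt(alias="My First Prompt")
prompt_v1.pull(version="00.00.01")

prompt_v2 = Prompt(alias="My First Prompt")
prompt_v2.pull(version="00.00.02")  # hypothetical second version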
To create your first prompt, simply head to Hyperparameters > Prompts and provide an alias as a unique identifier for your prompt.
You can include variables in your prompt by wrapping them inside double curly brackets ({{ variable_name }}). The variable name can be in snake case, camel case, or really any case, as long as:
- There are no spaces in the variable name.
- There is exactly one space between the variable name and both the opening and closing curly bracket pairs.
For example, here is some invalid syntax that won't raise an error, but will prevent variables from being interpolated, as you'll learn later:
# ❌ These won't be interpolated properly
"Hi my name is {{ variable name }}."
"Hi my name is {{ variable_name }}."
Confident AI will display the variables you've defined as you edit your prompt template on the cloud.
Version Your First Prompt
Once you've created your first prompt, create a new version for it. For those wondering, it's the act of creating a prompt version that lets you define the text used in your prompt.
Each time you create a new version, Confident AI will automatically assign a version number, which lets you specify which version to pull from Confident AI in your LLM application.
Pull Your Prompt From Confident AI
Pulling your prompt from Confident AI allows you to use it in your LLM application. You should pull your prompt at application startup to avoid unnecessary latency.
from deepeval.prompt import Prompt
prompt = Prompt(alias="My First Prompt")
# You can find all the available versions for your prompt on Confident AI
prompt.pull(version="00.00.01")
When you don't supply a version, deepeval will automatically pull the latest version available. The prompt you pull will be the raw, unprocessed prompt you've defined on Confident AI. Here's an example of what the prompt template could look like:
"Hi! The name's {{ first_name }}. {{ first_name }} {{ last_name }}."
You can access the raw prompt template and version pulled as follows:
...
print(prompt.value, prompt.version)
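Since pulling makes a network request, one common pattern is to pull once at startup and reuse the in-memory template for every request. Here's a minimal sketch (the handle_request function is hypothetical):

from deepeval.prompt import Prompt

# Pull once at application startup
prompt = Prompt(alias="My First Prompt")
prompt.pull(version="00.00.01")

# Reuse the in-memory template on every request; no extra network calls
def handle_request(first_name: str, last_name: str) -> str:
    return prompt.interpolate(first_name=first_name, last_name=last_name)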
Interpolate Your Prompt With Variables
Although you can technically use the prompt template exactly as it is defined on Confident AI, you'll most likely want to include dynamic variables at runtime. To interpolate values, simply use the interpolate() method with the variables supplied:
...
print(prompt.interpolate(first_name="Joe", last_name="Biden"))
The interpolate() method will replace all variables found within double curly brackets with the variables you've supplied. The names of the variables MUST match exactly for this to work. Note that calling the interpolate() method does not change the raw prompt template value stored in a Prompt instance.
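This means you can safely reuse the same Prompt instance to interpolate different variables:

...
print(prompt.interpolate(first_name="Joe", last_name="Biden"))
print(prompt.interpolate(first_name="Jane", last_name="Doe"))
# The raw template is unchanged
print(prompt.value)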
Using Your Prompt
Now that you've interpolated your prompt template with dynamic variables, you can go ahead and supply it to your LLM.
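For illustration only, here's a minimal sketch assuming OpenAI's Python SDK (any LLM provider works similarly):

from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt_to_my_llm}],
)
print(response.choices[0].message.content)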