Using Custom Embedding Models

Throughout deepeval, the only method that uses an embedding model is generate_goldens_from_docs() in the Synthesizer for synthetic data generation. This is because, in order to generate goldens from documents, the Synthesizer uses cosine similarity to extract the relevant context needed for data synthesis.

This guide will teach you how to use literally ANY embedding model to extract the context from documents that is required for synthetic data generation.

Using Azure OpenAI

You can use Azure's OpenAI embedding models by running the following commands in the CLI:

deepeval set-azure-openai --openai-endpoint=<endpoint> \
--openai-api-key=<api_key> \
--deployment-name=<deployment_name> \
--openai-api-version=<openai_api_version> \
--model-version=<model_version>

Then, run this to set the Azure OpenAI embedder:

deepeval set-azure-openai-embedding --embedding-deployment-name=<embedding_deployment_name>
Did You Know?

The first command configures deepeval to use Azure OpenAI LLM globally, while the second command configures deepeval to use Azure OpenAI's embedding models globally.
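Once both commands have been run, the Synthesizer will pick up the globally configured Azure OpenAI embedder automatically. As a minimal sketch, no embedder argument is needed:

from deepeval.synthesizer import Synthesizer

# Uses the globally configured Azure OpenAI embedding model by default
synthesizer = Synthesizer()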

Using Local LLM Models

Several local LLM providers offer an OpenAI API-compatible endpoint, such as Ollama or LM Studio. You can use them with deepeval by setting a few parameters from the CLI. To configure any of these providers, you need to supply the base URL where the service is running. Some of the most common base URLs are:

  • Ollama: http://localhost:11434/v1/
  • LM Studio: http://localhost:1234/v1/

For example, to use a local model served by Ollama, run the following command:

deepeval set-local-model --model-name=<model_name> \
--base-url="http://localhost:11434/v1/" \
--api-key="ollama"

Here, model_name is one of the LLMs that appears when you run ollama list.

If you ever wish to stop using your local LLM and switch back to regular OpenAI, simply run:

deepeval unset-local-model

Then, run this to set the local embedding model:

deepeval set-local-embeddings --model-name=<embedding_model_name> \
--base-url="http://localhost:11434/v1/" \
--api-key="ollama"
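For instance, assuming you have pulled an embedding model such as nomic-embed-text in Ollama (the model name here is only an illustration; substitute whatever ollama list shows), the command would be:

deepeval set-local-embeddings --model-name=nomic-embed-text \
--base-url="http://localhost:11434/v1/" \
--api-key="ollama"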

To revert to the default OpenAI embeddings, run:

deepeval unset-local-embeddings

For additional instructions about LLM and embedding model availability and base URLs, consult the provider's documentation.

Using Any Custom Model

Alternatively, you can create a custom embedding model in code by inheriting the base DeepEvalBaseEmbeddingModel class. Here is an example of the same custom Azure OpenAI embedding model, but created in code using LangChain's langchain_openai module:

from typing import List
from langchain_openai import AzureOpenAIEmbeddings
from deepeval.models import DeepEvalBaseEmbeddingModel

class CustomEmbeddingModel(DeepEvalBaseEmbeddingModel):
    def __init__(self):
        pass

    def load_model(self):
        return AzureOpenAIEmbeddings(
            openai_api_version="...",
            azure_deployment="...",
            azure_endpoint="...",
            openai_api_key="...",
        )

    def embed_text(self, text: str) -> List[float]:
        embedding_model = self.load_model()
        return embedding_model.embed_query(text)

    def embed_texts(self, texts: List[str]) -> List[List[float]]:
        embedding_model = self.load_model()
        return embedding_model.embed_documents(texts)

    async def a_embed_text(self, text: str) -> List[float]:
        embedding_model = self.load_model()
        return await embedding_model.aembed_query(text)

    async def a_embed_texts(self, texts: List[str]) -> List[List[float]]:
        embedding_model = self.load_model()
        return await embedding_model.aembed_documents(texts)

    def get_model_name(self):
        return "Custom Azure Embedding Model"

When creating a custom embedding model, you should ALWAYS:

  • inherit DeepEvalBaseEmbeddingModel.
  • implement the get_model_name() method, which simply returns a string representing your custom model name.
  • implement the load_model() method, which is responsible for returning the model object instance.
  • implement the embed_text() method with one and only one parameter of type str as the text to be embedded, returning a vector of type List[float]. We called embedding_model.embed_query(text) to embed the text in this particular example, but this could differ depending on the implementation of your custom model object.
  • implement the embed_texts() method with one and only one parameter of type List[str] as the list of texts to be embedded, returning a list of vectors of type List[List[float]].
  • implement the asynchronous a_embed_text() and a_embed_texts() methods, with the same function signatures as their respective synchronous versions. Since these are asynchronous methods, remember to use async/await.
note

If an asynchronous version of your embedding model does not exist, simply reuse the synchronous implementation:

class CustomEmbeddingModel(DeepEvalBaseEmbeddingModel):
    ...
    async def a_embed_text(self, text: str) -> List[float]:
        return self.embed_text(text)
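Because the base class only requires the methods listed above, you can wrap any embedding backend this way, not just hosted APIs. Below is a minimal sketch of the same pattern around a local sentence-transformers model; the all-MiniLM-L6-v2 model name is just an example and assumes the sentence-transformers package is installed:

from typing import List
from sentence_transformers import SentenceTransformer
from deepeval.models import DeepEvalBaseEmbeddingModel

class LocalEmbeddingModel(DeepEvalBaseEmbeddingModel):
    def __init__(self):
        # Load the model once and reuse it across calls
        self.model = SentenceTransformer("all-MiniLM-L6-v2")

    def load_model(self):
        return self.model

    def embed_text(self, text: str) -> List[float]:
        # encode() returns a numpy array; convert it to List[float]
        return self.load_model().encode(text).tolist()

    def embed_texts(self, texts: List[str]) -> List[List[float]]:
        return self.load_model().encode(texts).tolist()

    async def a_embed_text(self, text: str) -> List[float]:
        # sentence-transformers has no async API, so reuse the sync version
        return self.embed_text(text)

    async def a_embed_texts(self, texts: List[str]) -> List[List[float]]:
        return self.embed_texts(texts)

    def get_model_name(self):
        return "Local Sentence-Transformers Embedding Model"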

Lastly, provide the custom embedding model through the embedder parameter when creating a Synthesizer:

from deepeval.synthesizer import Synthesizer
...

synthesizer = Synthesizer(embedder=CustomEmbeddingModel())
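Continuing the snippet above, you can then generate goldens from your documents as usual; the document paths below are placeholders:

synthesizer.generate_goldens_from_docs(
    document_paths=["example.pdf", "example.txt"],
)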
tip

If you run into invalid JSON errors using custom models, you may want to consult this guide on using custom LLMs for evaluation, as synthetic data generation also supports pydantic confinement for custom models.