Multimodal Answer Relevancy

The multimodal answer relevancy metric measures the quality of your Multimodal RAG pipeline's generator by evaluating how relevant the actual_output of your MLLM application is to the provided input. deepeval's multimodal answer relevancy metric is a self-explaining MLLM-Eval, meaning it outputs a reason for its metric score.

info

The Multimodal Answer Relevancy is the multimodal adaptation of DeepEval's answer relevancy metric. It accepts images in addition to text for the input and actual_output.
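For example, a test case can carry an image in the input as well as in the actual_output. A minimal sketch is shown below; the image URL is a placeholder, and the exact MLLMImage constructor arguments may vary across deepeval versions:

from deepeval.test_case import MLLMTestCase, MLLMImage

test_case = MLLMTestCase(
    input=[
        "What landmark is shown in this image?",
        MLLMImage(url="https://example.com/eiffel_tower.jpg"),  # placeholder image URL
    ],
    actual_output=["The image shows the Eiffel Tower in Paris."],
)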

Required Arguments

To use the MultimodalAnswerRelevancyMetric, you'll have to provide the following arguments when creating an MLLMTestCase:

  • input
  • actual_output

Example

from deepeval import evaluate
from deepeval.metrics import MultimodalAnswerRelevancyMetric
from deepeval.test_case import MLLMTestCase, MLLMImage

metric = MultimodalAnswerRelevancyMetric()
test_case = MLLMTestCase(
    input=["Tell me about some landmarks in France"],
    actual_output=[
        "France is home to iconic landmarks like the Eiffel Tower in Paris.",
        MLLMImage(...)
    ]
)

metric.measure(test_case)
print(metric.score)
print(metric.reason)

# or evaluate test cases in bulk
evaluate([test_case], [metric])

There are six optional parameters when creating a MultimodalAnswerRelevancyMetric, shown in the sketch after this list:

  • [Optional] threshold: a float representing the minimum passing threshold, defaulted to 0.5.
  • [Optional] model: a string specifying which of OpenAI's Multimodal GPT models to use, OR any custom MLLM model of type DeepEvalBaseMLLM. Defaulted to 'gpt-4o'.
  • [Optional] include_reason: a boolean which when set to True, will include a reason for its evaluation score. Defaulted to True.
  • [Optional] strict_mode: a boolean which when set to True, enforces a binary metric score: 1 for perfection, 0 otherwise. It also overrides the current threshold and sets it to 1. Defaulted to False.
  • [Optional] async_mode: a boolean which when set to True, enables concurrent execution within the measure() method. Defaulted to True.
  • [Optional] verbose_mode: a boolean which when set to True, prints the intermediate steps used to calculate said metric to the console, as outlined in the How Is It Calculated section. Defaulted to False.
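For instance, here is a minimal sketch that sets all six optional parameters explicitly; the values shown are illustrative, not required changes to the defaults:

from deepeval.metrics import MultimodalAnswerRelevancyMetric

metric = MultimodalAnswerRelevancyMetric(
    threshold=0.7,        # raise the passing threshold above the 0.5 default
    model="gpt-4o",       # OpenAI multimodal model, or a custom DeepEvalBaseMLLM instance
    include_reason=True,  # return a reason alongside the score
    strict_mode=False,    # keep the continuous score instead of a binary 0/1
    async_mode=True,      # run evaluation steps concurrently inside measure()
    verbose_mode=False,   # set to True to print intermediate calculation steps
)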

How Is It Calculated?

The MultimodalAnswerRelevancyMetric score is calculated according to the following equation:

\text{Multimodal Answer Relevancy} = \frac{\text{Number of Relevant Statements}}{\text{Total Number of Statements}}

The MultimodalAnswerRelevancyMetric first uses an MLLM to extract all statements and images in the actual_output, before using the same MLLM to classify whether each statement and image is relevant to the input.
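For example, if the MLLM extracts four statements and images from the actual_output and judges three of them relevant to the input, the score is 3/4 = 0.75, which passes the default threshold of 0.5.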

tip

You can set the verbose_mode of ANY deepeval metric to True to debug the measure() method:

...

metric = MultimodalAnswerRelevancyMetric(verbose_mode=True)
metric.measure(test_case)