Image Reference
The Image Reference metric evaluates how accurately images are referred to or explained by accompanying text. deepeval's Image Reference metric is a self-explaining MLLM-Eval, meaning it provides a rationale for its assigned score.

Image Reference evaluates MLLM responses containing text accompanied by retrieved or generated images.
Required Arguments
To use the `ImageReferenceMetric`, you'll have to provide the following arguments when creating an `MLLMTestCase`:

- `input`
- `actual_output`

Remember that the `actual_output` of an `MLLMTestCase` is a list of strings and `MLLMImage` objects. If multiple images are provided in the actual output, the final score will be the average of each image's reference score.
Example
```python
from deepeval import evaluate
from deepeval.metrics import ImageReferenceMetric
from deepeval.test_case import MLLMTestCase, MLLMImage

# Replace this with your actual MLLM application output
actual_output = [
    "1. Take the sheet of paper and fold it lengthwise",
    MLLMImage(url="./paper_plane_1", local=True),
    "2. Unfold the paper. Fold the top left and right corners towards the center.",
    MLLMImage(url="./paper_plane_2", local=True),
    ...
]

metric = ImageReferenceMetric(
    threshold=0.7,
    include_reason=True,
)
test_case = MLLMTestCase(
    input=["Provide step-by-step instructions on how to fold a paper airplane."],
    actual_output=actual_output,
)

metric.measure(test_case)
print(metric.score)
print(metric.reason)

# or evaluate test cases in bulk
evaluate([test_case], [metric])
```
There are five optional parameters when creating an `ImageReferenceMetric`:
- [Optional] `threshold`: a float representing the minimum passing threshold, defaulted to 0.5.
- [Optional] `strict_mode`: a boolean which, when set to `True`, enforces a binary metric score: 1 for perfection, 0 otherwise. It also overrides the current threshold and sets it to 1. Defaulted to `False`.
- [Optional] `async_mode`: a boolean which, when set to `True`, enables concurrent execution within the `measure()` method. Defaulted to `True`.
- [Optional] `verbose_mode`: a boolean which, when set to `True`, prints the intermediate steps used to calculate the metric to the console, as outlined in the How Is It Calculated section. Defaulted to `False`.
- [Optional] `max_context_size`: a number representing the maximum number of characters in each context, as outlined in the How Is It Calculated section. Defaulted to `None`.
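To make the interplay between `threshold` and `strict_mode` concrete, here is a plain-Python sketch of the pass/fail decision. This is an illustration of the documented semantics, not deepeval's actual implementation:

```python
def is_successful(score: float, threshold: float = 0.5, strict_mode: bool = False) -> bool:
    """Sketch of the pass/fail verdict for a metric score.

    With strict_mode, the configured threshold is overridden and set to 1,
    so only a perfect score passes; otherwise the score must meet the threshold.
    """
    if strict_mode:
        threshold = 1.0  # strict_mode overrides the configured threshold
    return score >= threshold
```

For example, a score of 0.7 passes the default threshold of 0.5, but fails under `strict_mode=True` because anything below a perfect 1 is scored as a failure.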
How Is It Calculated?
The `ImageReference` score is calculated as follows:
- Individual Image Reference: Each image's reference score is based on the text directly above and below the image, limited to `max_context_size` characters. If `max_context_size` is not supplied, all available text is used.
- Final Score: The overall
ImageReference
score is the average of all individual image reference scores for each image:
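The two steps above can be sketched in plain Python. The function and variable names are illustrative (not deepeval's internals), and the context-gathering logic is an assumption based on the description of `max_context_size`:

```python
def build_context(output, image_index, max_context_size=None):
    """Collect the text directly above and below an image in the output list.

    If max_context_size is given, each side of the context is truncated to
    that many characters (keeping the text closest to the image); otherwise
    all available text is used.
    """
    above = " ".join(x for x in output[:image_index] if isinstance(x, str))
    below = " ".join(x for x in output[image_index + 1:] if isinstance(x, str))
    if max_context_size is not None:
        above = above[-max_context_size:]  # tail of the text above the image
        below = below[:max_context_size]   # head of the text below the image
    return above, below


def final_score(per_image_scores):
    """Final Score: the average of the individual image reference scores."""
    return sum(per_image_scores) / len(per_image_scores)
```

With two images scored 0.8 and 0.6, `final_score([0.8, 0.6])` averages them to 0.7, which would pass the default threshold of 0.5 but fail a threshold of 0.75.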