Task Completion
The task completion metric uses LLM-as-a-judge to evaluate how effectively an LLM agent accomplishes the task outlined in the input, based on the tools_called and the actual_output of the agent. deepeval's task completion metric is a self-explaining LLM-Eval, meaning it outputs a reason for its metric score.
TaskCompletion is an agentic metric, specifically for evaluating tool-calling LLM agents. To see why each test case parameter is necessary in calculating the TaskCompletion score, see the How Is It Calculated section below.
Required Arguments
To use the TaskCompletionMetric, you'll have to provide the following arguments when creating an LLMTestCase:

- input
- actual_output
- tools_called

The input and actual_output are required to create an LLMTestCase (and hence required by all metrics) even though they might not be used for metric calculation. Read the How Is It Calculated section below to learn more.
Example
from deepeval import evaluate
from deepeval.test_case import LLMTestCase, ToolCall
from deepeval.metrics import TaskCompletionMetric

metric = TaskCompletionMetric(
    threshold=0.7,
    model="gpt-4o",
    include_reason=True
)
test_case = LLMTestCase(
    input="Plan a 3-day itinerary for Paris with cultural landmarks and local cuisine.",
    actual_output=(
        "Day 1: Eiffel Tower, dinner at Le Jules Verne. "
        "Day 2: Louvre Museum, lunch at Angelina Paris. "
        "Day 3: Montmartre, evening at a wine bar."
    ),
    tools_called=[
        ToolCall(
            name="Itinerary Generator",
            description="Creates travel plans based on destination and duration.",
            input_parameters={"destination": "Paris", "days": 3},
            output=[
                "Day 1: Eiffel Tower, Le Jules Verne.",
                "Day 2: Louvre Museum, Angelina Paris.",
                "Day 3: Montmartre, wine bar.",
            ],
        ),
        ToolCall(
            name="Restaurant Finder",
            description="Finds top restaurants in a city.",
            input_parameters={"city": "Paris"},
            output=["Le Jules Verne", "Angelina Paris", "local wine bars"],
        ),
    ],
)

# To run metric as a standalone
# metric.measure(test_case)
# print(metric.score, metric.reason)
evaluate(test_cases=[test_case], metrics=[metric])
There are SIX optional parameters when creating a TaskCompletionMetric:

- [Optional] threshold: a float representing the minimum passing threshold, defaulted to 0.5.
- [Optional] model: a string specifying which of OpenAI's GPT models to use, OR any custom LLM model of type DeepEvalBaseLLM. Defaulted to 'gpt-4o'.
- [Optional] include_reason: a boolean which when set to True, will include a reason for its evaluation score. Defaulted to True.
- [Optional] strict_mode: a boolean which when set to True, enforces a binary metric score: 1 for perfection, 0 otherwise. It also overrides the current threshold and sets it to 1. Defaulted to False.
- [Optional] async_mode: a boolean which when set to True, enables concurrent execution within the measure() method. Defaulted to True.
- [Optional] verbose_mode: a boolean which when set to True, prints the intermediate steps used to calculate said metric to the console, as outlined in the How Is It Calculated section. Defaulted to False.
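For instance, here is a sketch of the metric from the example above with all six optional parameters set explicitly (the values shown are illustrative, not recommendations):

metric = TaskCompletionMetric(
    threshold=0.7,         # minimum passing score
    model="gpt-4o",        # evaluation model
    include_reason=True,   # output a reason alongside the score
    strict_mode=False,     # keep graded (non-binary) scoring
    async_mode=True,       # allow concurrent execution in measure()
    verbose_mode=False     # suppress intermediate calculation steps
)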
As a standalone
You can also run the TaskCompletionMetric on a single test case as a standalone, one-off execution.

...
metric.measure(test_case)
print(metric.score, metric.reason)

This is great for debugging or if you wish to build your own evaluation pipeline, but you will NOT get the benefits (testing reports, Confident AI platform) and all the optimizations (speed, caching, computation) that the evaluate() function or deepeval test run offers.
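If you're running many standalone measurements, deepeval metrics also expose an asynchronous a_measure() method; assuming your installed version includes it, the equivalent async call looks like this:

import asyncio

async def main():
    # a_measure() is the async counterpart of measure()
    await metric.a_measure(test_case)
    print(metric.score, metric.reason)

asyncio.run(main())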
How Is It Calculated?
The TaskCompletionMetric score is calculated according to the following equation:

Task Completion Score = AlignmentScore(Task, Outcome)

- Task and Outcome are extracted from the input, actual_output, and tools_called using an LLM.
- The Alignment Score measures how well the outcome aligns with the task (or user-defined task), as judged by an LLM.

While the task is primarily derived from the input and the outcome from the actual_output, these parameters alone are insufficient to calculate the Task Completion Score. See below for details.
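Conceptually, the scoring flow looks like the following sketch. Both helper functions are hypothetical stand-ins for the metric's internal LLM-judged steps and are not deepeval APIs:

def extract_task_and_outcome(input: str, actual_output: str, tools_called: list) -> tuple[str, str]:
    # In the real metric, an LLM infers the task and outcome from the
    # full test case; echoing the raw fields keeps this sketch runnable.
    return input, actual_output

def alignment_score(task: str, outcome: str) -> float:
    # In the real metric, an LLM judges how well the outcome fulfills
    # the task and returns a value between 0 and 1.
    return 1.0

def task_completion_score(input: str, actual_output: str, tools_called: list) -> float:
    # Step 1: extract task and outcome; Step 2: judge their alignment.
    task, outcome = extract_task_and_outcome(input, actual_output, tools_called)
    return alignment_score(task, outcome)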
What Is Task?
The task represents the user's goal or the action they want the agent to perform. The input alone often lacks the specificity needed to determine the full intent. For example, the input "Can you help me recover?" is unclear: it could mean recovering an account, a file, or something else. However, if the agent calls a recovery API, this action provides the necessary context to identify the task as assisting with account recovery, which is why the task is extracted from the entire LLMTestCase.
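To make this concrete, here is a sketch of a test case for the recovery example, continuing with the imports from the example above (the tool name, parameters, and outputs are illustrative):

test_case = LLMTestCase(
    # On its own, this input is ambiguous: recover what?
    input="Can you help me recover?",
    actual_output="I've sent a password reset link to your registered email.",
    tools_called=[
        # This illustrative tool call is what pins the task down to
        # account recovery rather than, say, file recovery.
        ToolCall(
            name="Account Recovery API",
            description="Initiates account recovery for a user.",
            input_parameters={"user_id": "12345"},
            output="Reset link sent",
        )
    ],
)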
What Is Outcome?
The outcome refers to the agent's actions in response to the user's request. Like the task, the outcome cannot be derived from the actual_output alone. For example, if a restaurant reservation agent replies with "Booked for tonight," it's impossible to confirm whether the user's goal was met without additional information such as the restaurant name, time, and tools used. These test case details (especially tools_called) are crucial to verify that the outcome aligns with the user's intended task.
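As a sketch, the reservation example might look like this, where tools_called supplies the details the reply omits (again, the tool and its parameters are illustrative):

test_case = LLMTestCase(
    input="Book me a table for two at an Italian place for 7pm tonight.",
    # The reply alone doesn't say where, when, or for how many
    actual_output="Booked for tonight.",
    tools_called=[
        # The illustrative tool call carries the details needed to
        # verify the outcome: restaurant, time, and party size.
        ToolCall(
            name="Reservation Booker",
            description="Books a table at a restaurant.",
            input_parameters={
                "restaurant": "Trattoria Roma",
                "time": "19:00",
                "party_size": 2,
            },
            output="Confirmed: Trattoria Roma, 7:00 PM, table for 2",
        )
    ],
)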