Modernization Guard

The Modernization Guard is an output guard that analyzes the responses generated by your language model to ensure they are based on the most current and accurate information available. It flags outdated or obsolete data in the model's outputs so that the information provided remains timely and relevant.

info

ModernizationGuard is only available as an output guard.

Example

from deepeval.guardrails import ModernizationGuard

# The LLM output you want to screen for outdated information
model_output = "The latest iPhone model was released in 2020."

# No arguments are required to initialize the guard
modernization_guard = ModernizationGuard()
guard_result = modernization_guard.guard(response=model_output)

There are no required arguments when initializing the ModernizationGuard object. The guard function accepts a single parameter, response, which is the output of your LLM application.

Interpreting Guard Result

print(guard_result.score)
print(guard_result.score_breakdown)

guard_result.score is an integer that equals 1 if the guard has been breached and 0 otherwise. The score_breakdown for ModernizationGuard is a dictionary containing:

  • score: A binary value (1 or 0), where 1 indicates that outdated content was detected.
  • reason: A brief explanation of why the score was assigned.
{
  "score": 1,
  "reason": "The response references information from 2020, which is outdated. The latest iPhone model is the iPhone 16, released in 2024."
}
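
Since the guard exposes only a binary score and a reason, a common pattern is to branch on guard_result.score and log or surface the reason when a breach is detected. Below is a minimal sketch of that pattern; the screen_response helper and the fallback message are illustrative choices, not part of the deepeval API.

from deepeval.guardrails import ModernizationGuard

modernization_guard = ModernizationGuard()

def screen_response(model_output: str) -> str:
    """Return the model output unchanged, or a fallback when the guard is breached."""
    guard_result = modernization_guard.guard(response=model_output)

    if guard_result.score == 1:
        # The guard flagged outdated content; the reason explains what was detected.
        reason = guard_result.score_breakdown.get("reason", "outdated content detected")
        print(f"ModernizationGuard breached: {reason}")
        # How you recover is application-specific: regenerate, warn the user, etc.
        return "Sorry, I couldn't provide an up-to-date answer to that."

    return model_output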