Modernization Guard
The Modernization Guard is an output guard that analyzes the responses generated by your language model to ensure they are based on the most current and accurate information available. It helps identify outdated or obsolete information in the model's outputs, ensuring the responses provided are timely and relevant.
ModernizationGuard is only available as an output guard.
Here's what an unsafe output would look like in the context of modernization:
"The response references information from 2020, which is outdated. The latest iPhone model is the iPhone 16, released in 2024."
— Probably your LLM
Example
Since ModernizationGuard is an output guard, simply provide it as a guard in the list of guards when initializing a Guardrails object:
from deepeval.guardrails import Guardrails, ModernizationGuard
guardrails = Guardrails(guards=[ModernizationGuard()])
Then, call the guard_output method to make use of the ModernizationGuard:
...
output = generate_output(input)
guard_result = guardrails.guard_output(input=input, output=output)
print(guard_result)
There are no required arguments when initializing a ModernizationGuard.
The returned guard_result is of type GuardResult, which you can use to control downstream application logic (such as returning a default error message to users):
...
print(guard_result.breached, guard_result.guard_data)
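As a minimal sketch of such downstream logic, you could route on the breached flag before returning anything to the user. The helper name respond_or_fallback and the fallback message below are hypothetical, not part of the deepeval API:

```python
# Hypothetical helper: decide what to show the user based on the guard verdict.
def respond_or_fallback(output: str, breached: bool) -> str:
    """Return the model output if the guard passed, else a default error message."""
    if breached:
        # The guard flagged the output (e.g. it referenced outdated information),
        # so suppress it and show a safe fallback instead.
        return "Sorry, I couldn't produce an up-to-date answer. Please try again."
    return output
```

You would then call it with the guard result, e.g. `final_output = respond_or_fallback(output, guard_result.breached)`.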