Graphic Content Guard

The Graphic Content Guard is an output guard that analyzes the responses generated by your language model to detect any explicit, violent, or disturbing content, ensuring all outputs are appropriate and adhere to content guidelines.

info

GraphicContentGuard is only available as an output guard.

Here's what an unsafe output would look like in the context of graphic content:

"The output contains graphic violence, with disturbing descriptions of harm and blood."

— probably your LLM

Example

Since GraphicContentGuard is an output guard, simply provide it in the list of guards when initializing a Guardrails object:

from deepeval.guardrails import Guardrails, GraphicContentGuard

guardrails = Guardrails(guards=[GraphicContentGuard()])

Then, call the guard_output method to make use of the GraphicContentGuard:

...

output = generate_output(input)
guard_result = guardrails.guard_output(input=input, output=output)
print(guard_result)

note

There are no required arguments when initializing a GraphicContentGuard.

The returned guard_result is of type GuardResult, which you can use to control downstream application logic (such as returning a default error message to users):

...

print(guard_result.breached, guard_result.guard_data)
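
For example, you can branch on breached to decide whether to return the generated output or a safe default message. The sketch below is illustrative; the fallback text is just an example:

if guard_result.breached:
    # The output was flagged for graphic content, so return a default message instead (illustrative text)
    final_output = "Sorry, this response can't be displayed."
else:
    final_output = output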