Human in the Loop: Building Safety Controls into AI Systems

Understanding Human in the Loop (HITL)

Human in the Loop (HITL) is a critical safety pattern in AI systems in which certain actions require explicit human approval before execution. This approach is particularly important for actions that are irreversible, expensive, or that affect the world outside the agent's sandbox, such as sending messages, spending money, or deleting data.

Implementation Patterns

There are several ways to implement HITL in AI systems, each with its own trade-offs:

Synchronous Approval

This is the pattern your code implements: a direct pause in execution while waiting for human approval. When the agent attempts to generate an image, it stops and waits for explicit user confirmation before proceeding.

The flow typically looks like this:

  1. Agent decides to use a protected tool
  2. System pauses execution
  3. Human reviews and approves/denies
  4. System either executes the action or returns a denial message
  5. Agent continues with the conversation
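The flow above can be sketched in a few lines of Python. This is a minimal illustration, not a specific framework's API: the names run_tool, protected_tools, and the approve callback are all hypothetical.

```python
def run_tool(tool_name, tool_fn, args, protected_tools, approve):
    """Execute tool_fn, but pause for human approval if the tool is protected.

    approve is a callback (tool_name, args) -> bool, so the approval UI
    (console prompt, web dialog, etc.) stays decoupled from the agent loop.
    """
    if tool_name in protected_tools and not approve(tool_name, args):
        # Step 4 (denial branch): return a message the agent can relay.
        return {"status": "denied",
                "message": f"Human denied the call to {tool_name}"}
    # Step 4 (approval branch) / unprotected tool: execute normally.
    return {"status": "ok", "result": tool_fn(**args)}


def console_approve(tool_name, args):
    """A simple synchronous approver: block on console input (step 2-3)."""
    answer = input(f"Agent wants to call {tool_name}({args}). Approve? [y/N] ")
    return answer.strip().lower() == "y"
```

In a real agent loop you would pass console_approve (or a richer UI callback) into run_tool; because the approver is injected, the same gating logic works unchanged in tests or behind a web frontend.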

Asynchronous Queue

For longer-running processes or systems with multiple approvers, an asynchronous approach might be better:
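One way to sketch the asynchronous variant is a small in-memory approval queue: the agent submits a request and moves on, while reviewers list pending items and resolve them later. The class and method names below (ApprovalQueue, submit, resolve) are illustrative assumptions, not a real library; a production system would back this with a database and notifications.

```python
import uuid
from dataclasses import dataclass, field


@dataclass
class ApprovalRequest:
    """A pending tool call awaiting human review."""
    tool_name: str
    args: dict
    status: str = "pending"  # pending -> approved | denied
    id: str = field(default_factory=lambda: uuid.uuid4().hex)


class ApprovalQueue:
    """In-memory queue; multiple approvers can poll and resolve requests."""

    def __init__(self):
        self._requests = {}

    def submit(self, tool_name, args):
        """Agent side: enqueue a request and return its id without blocking."""
        req = ApprovalRequest(tool_name, args)
        self._requests[req.id] = req
        return req.id

    def pending(self):
        """Reviewer side: list requests still awaiting a decision."""
        return [r for r in self._requests.values() if r.status == "pending"]

    def resolve(self, request_id, approved):
        """Reviewer side: record the human's decision."""
        req = self._requests[request_id]
        req.status = "approved" if approved else "denied"

    def status(self, request_id):
        """Agent side: poll until the request leaves the pending state."""
        return self._requests[request_id].status
```

The agent polls status (or subscribes to a callback) and only executes the tool once the request is marked approved, which lets long-running reviews happen without holding the conversation open.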