# AI Agent Human Approval Protocol
A plain-text file convention for defining human notification and approval protocols in AI agent projects. Place it in your repo root — alongside AGENTS.md — and define which actions require human sign-off.
ESCALATE.md is a plain-text Markdown file you place in the root of any repository that contains an AI agent. It defines which actions require human approval before execution — and how to notify humans when those triggers are hit.
AI agents can send emails, make payments, deploy to production, and delete data — autonomously, continuously, and at speed. Without explicit approval gates, a well-intentioned agent can take irreversible actions no human sanctioned. Once sent, an email can't be unsent. Once deleted, data may be gone forever.
Drop ESCALATE.md in your repo root and define: which actions always require human approval (deploys, payments, bulk communications), which channels to notify (email, Slack, PagerDuty), how long to wait for a response, and what to do if no one answers. The agent reads it on startup. Your compliance team reads it in the audit.
The EU AI Act (effective August 2026) mandates human oversight for high-risk AI decisions. Multiple frameworks require audit trails of who approved what and when. ESCALATE.md creates that trail automatically — every approval, denial, and timeout is logged with timestamp and approver identity.
Copy the template from GitHub and place it in your project root:
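The canonical template is on GitHub; the sketch below is illustrative only, using the section names described on this page (TRIGGERS, CHANNELS, APPROVAL) with placeholder thresholds, channels, and addresses. The exact layout of the published template may differ.

```markdown
# ESCALATE.md

## TRIGGERS
- production deploys
- financial transactions over $100
- bulk external communications
- permanent data deletion
- privilege changes

## CHANNELS
- slack: #agent-approvals (timeout: 15m)
- email: oncall@example.com (timeout: 60m)

## APPROVAL
- slack reaction: ✅ / ❌
- email reply: APPROVE / DENY

## ON_TIMEOUT
- killswitch
```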
Before ESCALATE.md, approval rules were scattered: hardcoded in the system prompt, buried in config files, missing entirely, or documented in a Notion page no one reads. ESCALATE.md makes approval requirements version-controlled, auditable, and co-located with your code.
The AI agent reads it on startup. Your engineer reads it during code review. Your compliance team reads it during audits. Your regulator reads it if something goes wrong. One file serves all four audiences.
ESCALATE.md is one file in a complete open specification for AI agent safety. Each file addresses a different level of intervention.
A plain-text Markdown file defining which AI agent actions require human approval before execution. It configures notification channels, approval timeouts, and fallback behaviour. Every escalation event — approval, denial, timeout — is logged with full context for audit purposes.
ESCALATE.md is the pause-and-ask layer. KILLSWITCH.md is the emergency stop. An agent hitting an escalation trigger pauses and notifies a human. If no human responds within the configured timeout, ESCALATE.md automatically hands off to KILLSWITCH.md for a full shutdown.
Production deployments, external communications (emails, messages to real recipients), financial transactions, permanent data deletion, privilege changes, and any action estimated to cost over a defined threshold. ESCALATE.md lets you define this list per project.
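An agent's pre-action check against this list can be very small. This is a minimal sketch, assuming the trigger list and cost threshold have already been parsed from ESCALATE.md; the names `ALWAYS_ESCALATE`, `COST_THRESHOLD_USD`, and `requires_approval` are illustrative, not part of the spec.

```python
# Hypothetical pre-action gate: values would come from a parsed ESCALATE.md.
ALWAYS_ESCALATE = {"deploy", "payment", "bulk_email", "data_deletion", "privilege_change"}
COST_THRESHOLD_USD = 100.0

def requires_approval(action: str, estimated_cost_usd: float = 0.0) -> bool:
    """Return True if the action must pause for human approval."""
    return action in ALWAYS_ESCALATE or estimated_cost_usd > COST_THRESHOLD_USD

print(requires_approval("deploy"))           # True: always escalated
print(requires_approval("read_docs"))        # False: routine action
print(requires_approval("api_call", 250.0))  # True: over cost threshold
```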
Three methods: reply to the escalation email with APPROVE or DENY, react to the Slack notification with ✅ or ❌, or POST to the agent's approval API endpoint with a signed token. All methods are logged with the approver's identity.
The action requested (plain English), why the agent believes it's necessary, estimated cost, reversibility, alternatives considered, session ID for log correlation, and the approval deadline. Enough context for a human to make an informed decision quickly.
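That context travels with every notification, so it is natural to model it as a single structure. A minimal sketch, assuming Python 3.10+; the class and field names mirror the list above but are not prescribed by the spec:

```python
from dataclasses import dataclass

@dataclass
class EscalationContext:
    """Everything a human needs to approve or deny quickly."""
    action: str               # plain-English description of the request
    reason: str               # why the agent believes it is necessary
    estimated_cost_usd: float
    reversible: bool
    alternatives: list[str]   # options the agent considered and rejected
    session_id: str           # for log correlation
    deadline: str             # approval deadline, e.g. an ISO timestamp

    def to_message(self) -> str:
        """Render the notification body sent to each channel."""
        return (
            f"APPROVAL NEEDED: {self.action}\n"
            f"Why: {self.reason}\n"
            f"Estimated cost: ${self.estimated_cost_usd:.2f}\n"
            f"Reversible: {'yes' if self.reversible else 'no'}\n"
            f"Alternatives considered: {', '.join(self.alternatives) or 'none'}\n"
            f"Session: {self.session_id}\n"
            f"Respond by: {self.deadline}"
        )
```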
Configurable. Default behaviour: escalate to KILLSWITCH.md for a full stop. Alternative: deny the action automatically and log the timeout. You define the timeout period and the fallback in ESCALATE.md.
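The wait-then-fallback loop can be sketched as follows. This is an assumption-laden illustration, not the reference implementation: `check_decision` stands in for whatever channel polling the agent actually does, and the `"killswitch"` return value stands in for the KILLSWITCH.md handoff.

```python
import time

def await_approval(check_decision, timeout_s: float, fallback: str = "killswitch") -> str:
    """Poll for a human decision; on timeout, apply the configured fallback.

    check_decision() returns 'approve', 'deny', or None (no response yet).
    The fallback is read from ESCALATE.md: 'killswitch' for a full stop,
    or 'deny' to auto-deny and log the timeout.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        decision = check_decision()
        if decision in ("approve", "deny"):
            return decision
        time.sleep(min(1.0, max(0.0, deadline - time.monotonic())))
    # No human answered in time: hand off to the fallback behaviour.
    return fallback
```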
An open specification for AI agent human approval protocols. Defines TRIGGERS (actions always requiring approval: deploys, payments, bulk comms, data deletion), CHANNELS (email, Slack, PagerDuty with timeouts), APPROVAL methods (email reply, Slack reaction, API endpoint), CONTEXT requirements (action, reason, cost, reversibility), and AUDIT logging. Part of the AI safety stack: THROTTLE.md → ESCALATE.md → FAILSAFE.md → KILLSWITCH.md → TERMINATE.md → ENCRYPT.md. MIT licence.
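An agent's startup read of the file reduces to splitting it into those named sections. A sketch under one loud assumption: that the file uses `## NAME` headings with `- item` bullets. That layout is this example's convention, not necessarily the published spec's.

```python
def parse_escalate_md(text: str) -> dict[str, list[str]]:
    """Split an ESCALATE.md-style file into sections keyed by heading.

    Assumes '## NAME' headings followed by '- item' bullet lines;
    anything else (prose, blank lines) is ignored.
    """
    sections: dict[str, list[str]] = {}
    current = None
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("## "):
            current = line[3:].strip()
            sections[current] = []
        elif line.startswith("- ") and current is not None:
            sections[current].append(line[2:])
    return sections
```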
This domain is available for acquisition. It is the canonical home of the ESCALATE.md specification — the human oversight layer of the AI agent safety stack, directly relevant to EU AI Act human-in-the-loop requirements.
Inquire about acquisition, or email directly: info@escalate.md
Last updated: March 2026