Approval-centered collaboration¶
Structure human-agent review cycles around explicit approval ownership, negotiated evidence, and controlled handoffs so draft requests become decision-ready without obscuring who may approve, request changes, or stop progression.
Metadata¶
- Pattern id: approval-centered-collaboration
- Pattern family: Human-agent collaborative work
- Problem structure: Human-agent collaboration (human-agent-collaboration)
- Domains: Engineering (engineering), Finance (finance), Compliance (compliance), Operations (operations), Support (support), HR (hr)
Workflow goal¶
Keep a governed draft, request, or review package moving through repeated human-agent approval cycles so evidence, objections, revisions, and ownership stay visible until a human approval owner explicitly accepts the next handoff.
Inputs¶
Draft artifact or approval-bound request¶
- Description: The proposal, memo, exception request, change package, or other reviewable artifact that needs iterative revision before formal approval.
- Kind: draft
- Required: Yes
- Examples:
- Architecture exception draft awaiting review-board readiness confirmation
- Compensation exception memo that must pass finance and HR review before sign-off
- Operations change packet that still needs reviewer comment resolution before approval routing
Approval criteria and reviewer authority map¶
- Description: The approval thresholds, required reviewers, sign-off order, and non-waivable rules that define who may request changes, approve progression, or stop the handoff.
- Kind: policy
- Required: Yes
- Examples:
- Change-review policy defining required approvers and rollback evidence
- Delegation-of-authority rules for finance exceptions and compensation approvals
- HR review matrix showing when legal or specialist approval is mandatory
Evidence set and review feedback¶
- Description: Supporting evidence, reviewer comments, prior approvals, objections, and clarification requests that must be negotiated into the evolving artifact.
- Kind: evidence-set
- Required: Yes
- Examples:
- Reviewer annotations, risk findings, and linked source documents
- Budget analyses, compensation benchmarks, and approval comments
- Operational risk notes, readiness checks, and unresolved dependency comments
Current handoff and decision state¶
- Description: The latest artifact version, open issues, accepted edits, pending approvers, and prior review outcomes that constrain the next collaboration turn.
- Kind: case-state
- Required: No
- Examples:
- Version history showing one approver accepted while another requested changes
- Queue state indicating a required reviewer has not yet responded
- Open issue list showing evidence disputes that block approval readiness
Outputs¶
Negotiated review artifact¶
- Description: The collaboratively revised artifact that incorporates accepted edits, visible objections, and evidence responses without hiding unresolved disagreement.
- Kind: collaborative-draft
- Required: Yes
- Examples:
- Approval memo with reviewer-requested revisions and linked evidence updates
- Change packet revised to address engineering and operations reviewer concerns
Approval-readiness recommendation¶
- Description: A bounded recommendation describing whether the artifact appears ready for the next approval checkpoint, what remains unresolved, and which human still owns the decision.
- Kind: recommendation
- Required: Yes
- Examples:
- Recommendation to return the packet for one more evidence update before board review
- Recommendation that the draft is ready for controller sign-off with one explicitly accepted residual risk
Approval ownership and handoff ledger¶
- Description: Structured record of required reviewers, accepted or rejected changes, current approval owner, and the exact boundary where human-controlled approval must occur.
- Kind: handoff-record
- Required: Yes
- Examples:
- Ledger showing finance approved, HR requested changes, and the VP remains the final approval owner
- Handoff record showing the packet may enter the review queue but no approval decision has been made
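A handoff ledger like the one described above can be sketched as a small versioned record. This is a minimal illustration, not part of the pattern specification; the names `HandoffLedger`, `ReviewerStatus`, and `approval_owner` are assumptions chosen for clarity, and a real implementation would live in the review workbench's own data model.

```python
from dataclasses import dataclass, field
from enum import Enum


class ReviewerStatus(Enum):
    PENDING = "pending"
    APPROVED = "approved"
    CHANGES_REQUESTED = "changes_requested"


@dataclass
class HandoffLedger:
    """Minimal handoff-record state: who has reviewed, who owns approval."""
    artifact_version: str
    approval_owner: str  # the named human who may accept the next handoff
    reviewer_status: dict = field(default_factory=dict)
    approval_granted: bool = False  # never set by the workflow itself

    def summary(self) -> str:
        pending = [r for r, s in self.reviewer_status.items()
                   if s is ReviewerStatus.PENDING]
        return (f"v{self.artifact_version}: owner={self.approval_owner}, "
                f"pending={pending}, approved={self.approval_granted}")


# Ledger showing finance approved, HR requested changes, VP still owns the decision.
ledger = HandoffLedger(
    artifact_version="3",
    approval_owner="vp-finance",
    reviewer_status={
        "finance": ReviewerStatus.APPROVED,
        "hr": ReviewerStatus.CHANGES_REQUESTED,
    },
)
print(ledger.summary())
```

Keeping `approval_granted` as a field the workflow never writes mirrors the constraint that agent output must never equal approval.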
Review-cycle trace¶
- Description: Durable history of comments, evidence negotiations, revision proposals, and handoff checkpoints across the approval loop.
- Kind: audit-log
- Required: Yes
- Examples:
- Trace of reviewer objections, evidence refreshes, and accepted edits across three approval rounds
- Timeline showing when the workflow paused because authority or approval order became unclear
Environment¶
Operates in governance-sensitive review processes where a human and one or more agent roles repeatedly refine a draft, negotiate evidence, and control handoffs before a consequential approval decision is allowed to occur.
Systems¶
- Shared review workbench or comment surface
- Document, request, or case management systems
- Policy and approval-matrix repositories
- Evidence stores and source systems
Actors¶
- Request owner or drafter
- Reviewing approver or review committee member
- Final approval owner
- Agent orchestration layer coordinating review support
Constraints¶
- Keep approval authority explicit at every cycle so the workflow never implies that agent output equals approval.
- Preserve reviewer objections, dissenting evidence, and requested changes as first-class state rather than compressing them into a falsely clean revision.
- Stop at readiness recommendation and governed handoff; the pattern must not adjudicate approval or execute downstream action.
- Prevent silent handoff progression when required reviewers, approval order, or authority mappings are incomplete or disputed.
Assumptions¶
- Review tooling can preserve version history, comments, and approval-state metadata across cycles.
- Humans remain accountable for accepting recommendations, granting approval, and deciding whether unresolved issues are tolerable.
- Evidence can be refreshed or challenged during the loop without losing traceability to earlier review comments.
Capability requirements¶
- Retrieval (retrieval): The workflow must pull current evidence, prior comments, and approval-state details from multiple systems before each review cycle can be grounded.
- Synthesis (synthesis): Review feedback, evidence updates, and draft revisions need to be recombined into one inspectable artifact instead of staying fragmented across comments and attachments.
- Recommendation (recommendation): The pattern depends on bounded guidance about approval readiness, unresolved blockers, and next review posture without letting the workflow make the approval decision.
- Coordination (coordination): Multi-party review cycles require explicit ownership, sequencing, and handoff control so commenters, approvers, and drafters do not work at cross-purposes.
- Memory and state tracking (memory-and-state-tracking): The workflow must preserve version lineage, accepted versus rejected edits, reviewer status, and current approval owner across multiple cycles.
- Verification (verification): Evidence links, claimed issue resolution, and reviewer-state updates must be checked before the workflow recommends progression to another approval checkpoint.
- Policy and constraint checking (policy-and-constraint-checking): Approval order, required reviewers, segregation-of-duties rules, and non-waivable review conditions define whether the handoff can move forward at all.
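The policy-and-constraint-checking capability can be illustrated as a gate that returns blockers rather than a verdict. This is a hypothetical sketch: the function name, parameters, and blocker strings are assumptions, and real rules would come from the approval-matrix repository.

```python
def readiness_blockers(required_reviewers, completed, approval_order,
                       owner_confirmed):
    """Return reasons a readiness recommendation must NOT be surfaced yet.

    Illustrative gate: every required reviewer must have responded, responses
    must respect the mandated sign-off order, and a human approval owner must
    be confirmed. An empty list means the gate does not object; it is still
    not an approval.
    """
    blockers = []
    missing = [r for r in required_reviewers if r not in completed]
    if missing:
        blockers.append(f"required reviewers pending: {missing}")
    # Sign-off order: a reviewer may only complete after every predecessor
    # in the mandated approval_order has completed.
    for i, reviewer in enumerate(approval_order):
        if reviewer in completed:
            skipped = [r for r in approval_order[:i] if r not in completed]
            if skipped:
                blockers.append(f"{reviewer} signed before {skipped}")
    if not owner_confirmed:
        blockers.append("no confirmed human approval owner")
    return blockers


print(readiness_blockers(
    required_reviewers=["finance", "hr"],
    completed={"hr"},
    approval_order=["finance", "hr"],
    owner_confirmed=True,
))  # finance is pending, and hr signed out of order
```

Returning a list of named blockers, rather than a boolean, keeps the reasons visible in the review-cycle trace instead of collapsing them into a single pass/fail signal.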
Execution architecture¶
- Orchestrated multi-agent (orchestrated-multi-agent): Distinct review-support roles for evidence refresh, comment normalization, policy checking, and handoff-state management are often worth orchestrating separately because approval loops mix negotiation, provenance, and governance-sensitive ownership tracking.
- Human in the loop (human-in-the-loop): Humans remain embedded in every normal cycle because reviewers and approval owners must decide whether proposed revisions are sufficient and whether the next handoff may proceed.
Autonomy profile¶
- Level: Recommendation only (recommendation-only)
- Reversibility: Draft revisions and readiness assessments can be revised, but a premature or misleading readiness signal can bias reviewers, trigger avoidable escalation, or move a sensitive request into the wrong approval path before the mistake is caught.
- Escalation: Escalate whenever approval authority is ambiguous, reviewer disagreement remains unresolved on a material issue, required evidence cannot be validated, or the workflow would otherwise imply handoff permission without a clear human owner.
Human checkpoints¶
- Confirm the current approval owner, mandatory reviewers, and acceptance criteria before the workflow consolidates comments or proposes a readiness state.
- Decide whether proposed revisions and evidence responses sufficiently address reviewer feedback before another approval round begins.
- Accept or reject the governed handoff into the next approval checkpoint and record who now owns the decision.
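The final checkpoint, accepting or rejecting the governed handoff, can be sketched as an append-only, human-attributed ledger entry. The helper name and fields here are illustrative assumptions; the key property is that only a named human identity ever records acceptance.

```python
from datetime import datetime, timezone


def record_handoff_decision(ledger, artifact_version, decided_by,
                            accepted, note=""):
    """Append an explicit, human-attributed handoff decision to an audit ledger.

    Hypothetical helper: the checkpoint counts as passed only when a named
    human records acceptance; the workflow itself never writes accepted=True.
    """
    entry = {
        "artifact_version": artifact_version,
        "decided_by": decided_by,  # named human, never an agent identity
        "accepted": accepted,
        "note": note,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    ledger.append(entry)
    return entry


audit_log = []
record_handoff_decision(audit_log, "3", "vp-finance", True,
                        "one residual risk explicitly accepted")
print(audit_log[-1]["decided_by"], audit_log[-1]["accepted"])
```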
Risk and governance¶
- Risk level: High (high)
- Failure impact: Misstating approval readiness, losing reviewer objections, or obscuring who owns the sign-off can create significant financial, operational, or people-risk exposure and can push sensitive requests through governance checkpoints on a misleading record.
- Auditability: Preserve artifact versions, reviewer comments, evidence references, readiness recommendations, ownership changes, human overrides, and handoff timestamps so investigators can reconstruct how the approval state evolved.
Approval requirements¶
- A human approval owner must explicitly accept any readiness recommendation before the artifact advances to the next approval stage or formal review queue.
- Required reviewer objections, unresolved evidence disputes, and authority ambiguities must remain visible unless a human reviewer explicitly accepts the residual risk.
Privacy¶
- Limit sensitive personnel, financial, operational, or security details in shared review surfaces to what the relevant reviewers need.
- Apply retention and access controls so draft artifacts and approval discussions do not leak beyond the governed review group.
Security¶
- Restrict agent permissions so the workflow can read review context and update bounded collaboration state without silently changing approval authorities or committing downstream actions.
- Log authority-map changes, manual overrides, and handoff acceptances so covert expansion of workflow power is detectable.
Notes: High risk is appropriate because the pattern shapes how high-stakes approval decisions are framed and handed off, even though humans retain the actual approval authority.
Why agentic¶
- The next useful action depends on evolving reviewer comments, contested evidence, and changing handoff state rather than a fixed linear review script.
- Safe performance benefits from specialized agent roles that can refresh evidence, compare revisions, and track approval ownership while sharing one auditable collaboration state.
- The workflow must decide when to propose another revision, surface unresolved disagreement, or pause for human authority clarification instead of just producing a polished draft.
Failure modes¶
Approval ownership becomes unclear or stale during the review cycle¶
- Impact: Reviewers act on a packet without knowing who may approve progression, and sensitive requests may move forward without accountable human control.
- Severity: high
- Detectability: medium
- Mitigations:
- Keep the current approval owner and required next checkpoint as explicit fields in the handoff ledger.
- Revalidate authority mappings whenever review participants or request scope change materially.
Reviewer objections or dissenting evidence are compressed away during revision¶
- Impact: The updated artifact appears more settled than reality, leading downstream approvers to miss material disagreement or unresolved risk.
- Severity: high
- Detectability: medium
- Mitigations:
- Preserve unresolved objections and challenged evidence in dedicated trace and handoff sections.
- Block readiness recommendations that claim issues are closed without linked evidence or explicit human acceptance.
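The second mitigation above can be sketched as a check over the open-issue list: any issue marked closed must carry either a linked evidence reference or an explicit human acceptance. Field names (`status`, `evidence_link`, `accepted_by_human`) are assumptions for illustration.

```python
def unverified_closures(issues):
    """Return ids of issues marked closed without an evidence link or
    explicit human acceptance. A readiness recommendation is blocked
    while this list is non-empty."""
    return [
        i["id"] for i in issues
        if i.get("status") == "closed"
        and not i.get("evidence_link")
        and not i.get("accepted_by_human")
    ]


issues = [
    {"id": "OBJ-1", "status": "closed", "evidence_link": "doc://risk-memo-v2"},
    {"id": "OBJ-2", "status": "closed"},  # closed with no supporting record
    {"id": "OBJ-3", "status": "open"},
]
print(unverified_closures(issues))  # ['OBJ-2']
```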
The workflow recommends approval readiness before mandatory review conditions are satisfied¶
- Impact: Requests enter formal approval queues too early, creating rework, governance breaches, or misplaced trust in the review record.
- Severity: high
- Detectability: medium
- Mitigations:
- Check required reviewer completion, approval order, and non-waivable control gates before surfacing readiness.
- Require explicit human confirmation when residual issues are tolerated for the next stage.
Parallel edits or stale state reintroduce superseded comments and evidence¶
- Impact: Review cycles churn, approvers see inconsistent records, and humans lose trust in the collaboration surface.
- Severity: medium
- Detectability: high
- Mitigations:
- Version the artifact and handoff ledger together so comments and evidence stay tied to the correct revision.
- Highlight stale comments, superseded evidence, and concurrent edits before generating the next recommendation.
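The staleness mitigation above can be shown as a check that pins every comment or evidence reference to an artifact version and flags anything attached to a superseded one. This is an assumed shape, not a prescribed schema.

```python
def stale_items(current_version, items):
    """Flag comments or evidence pinned to a superseded artifact version,
    so they are surfaced before the next recommendation instead of being
    silently carried forward."""
    return [i["id"] for i in items if i["version"] < current_version]


comments = [
    {"id": "C-9", "version": 2},   # written against a superseded revision
    {"id": "C-12", "version": 3},  # current
]
print(stale_items(3, comments))  # ['C-9']
```

Versioning comments and ledger entries together this way keeps review feedback tied to the revision it actually addressed.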
Evaluation¶
Success metrics¶
- Percentage of approval-bound artifacts that reach the next human checkpoint without bouncing due to missing ownership, missing review state, or hidden objections.
- Reduction in time spent reconciling reviewer comments, evidence disputes, and version confusion across approval rounds.
- Rate at which downstream approvers can identify the current owner, unresolved issues, and provenance of major revisions from the handoff ledger.
Quality criteria¶
- The review artifact makes accepted edits, open objections, and current approval ownership easy to inspect.
- Readiness recommendations stay bounded to collaboration support and never masquerade as the approval decision itself.
- Handoff state remains synchronized with draft content so reviewers do not see a cleaner or more final status than the evidence supports.
Robustness checks¶
- Test conflicting reviewer instructions and verify the workflow preserves the disagreement rather than inventing consensus.
- Test authority changes mid-cycle and confirm the handoff ledger updates before another readiness recommendation is issued.
- Test stale evidence or superseded comments and ensure the next cycle highlights the mismatch instead of silently carrying it forward.
Benchmark notes: Evaluate approval-loop quality using ownership clarity, objection visibility, and handoff reliability in addition to drafting speed; a faster cycle is not a win if it launders disagreement or blurs authority.
Implementation notes¶
Orchestration notes¶
- Keep comment intake, evidence refresh, revision proposal, readiness assessment, and handoff-state update as explicit stages over shared review state.
- Separate the collaboration surface from any downstream approval system so a recommendation cannot be mistaken for an approval event.
Integration notes¶
- Common implementations connect shared review workbenches, document systems, approval queues, and policy repositories.
- Keep the pattern neutral about specific document suites, review tools, or enterprise approval platforms.
Deployment notes¶
- Start where email- or comment-driven approval loops currently create version drift, hidden objections, or uncertain ownership.
- Tune readiness and issue-resolution rules conservatively with real approvers before expanding the workflow into more sensitive approval paths.
References¶
Example domains¶
- Engineering (engineering): A review board and engineering lead iterate with agents on an architecture exception package, keeping rollback evidence, reviewer objections, and final sign-off ownership explicit.
- Finance (finance): Finance reviewers and a compensation analyst use agents to reconcile benchmark data, controller comments, and delegation rules before a compensation exception moves to the approving VP.
- Compliance (compliance): Compliance, legal, and control-testing reviewers iterate with agents on a regulator reporting timeline exception packet, keeping reviewer objections, evidence gaps, and the next approval owner explicit.
- Operations (operations): Operations leaders and agents repeatedly revise a maintenance-change approval packet, preserving unresolved dependencies and the human owner of the next approval checkpoint.
- Support (support): Support, security, legal, and revenue reviewers use agents to refine an incident remediation and credit package while preserving contested commitments and explicit approval ownership.
- HR (hr): HR reviewers collaborate with agents on an accommodation or exception memo, negotiating evidence and reviewer comments without obscuring who retains final approval authority.
Related patterns¶
- Analyst copilot loop (more-governed-variant-of)
- Both patterns structure mixed-initiative collaboration, but this one centers on explicit approval cycles, negotiated evidence, and named approval ownership rather than open-ended shared drafting.
- Approval packet generation (can-consume-output-from)
- Approval packet generation can supply the initial governed artifact and evidence bundle that this pattern iteratively revises through review cycles.
Grounded instances¶
- Regulator reporting timeline exception package readiness loop
- Trade-surveillance duplicate-alert suppression exception package readiness loop
- Production shared credential exception review-board readiness loop
- Commercial real estate CECL qualitative overlay exception package readiness loop
- Material vendor payment-control exception package readiness loop
- Critical backfill headcount-freeze exception package readiness loop
- Cross-border remote-work exception package readiness loop
- Network fuel system test deferral exception package readiness loop
- Temporary sortation light curtain bypass exception package readiness loop
- Enterprise security incident remediation credit package readiness loop
Canonical source¶
data/patterns/human-agent-collaborative-work/approval-centered-collaboration.yaml