
Use‑Case Card Template (AI)

AI Safety Pack Component

PeopleSafetyLab | February 24, 2026 | 2 min read | intermediate

Version: v1.0

Use this one‑pager to document any Conditional AI use‑case (per 02a-ai-use-case-matrix.md).

Store one card per use‑case. Link it from your use‑case register.
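
If your use-case register lives in code rather than a document store, each card can also be captured as a structured record. This is a minimal sketch, not part of the template itself; the field names and enumerated values below mirror the card's sections but are illustrative assumptions, not a prescribed schema.

```python
from dataclasses import dataclass, field

# Hypothetical machine-readable form of the use-case card.
# Field names and allowed values mirror the template sections
# but are illustrative assumptions, not a prescribed schema.
@dataclass
class UseCaseCard:
    name: str
    business_owner: str
    customer_facing: bool
    decision_impact: str          # "Low" / "Medium" / "High"
    automation_level: str         # "Draft only" / "Auto-suggest" / "Auto-act"
    data_classes: list = field(default_factory=list)   # e.g. ["D0", "D1"]
    exposure_level: str = "O0"    # O0 / O1 / O2
    controls: list = field(default_factory=list)       # e.g. ["C-H1", "C-Q1"]

card = UseCaseCard(
    name="Support reply drafting",
    business_owner="Head of Support",
    customer_facing=True,
    decision_impact="Medium",
    automation_level="Draft only",
    data_classes=["D0", "D1"],
    exposure_level="O1",
    controls=["C-H1", "C-Q1", "C-L1"],
)
print(card.name, card.exposure_level)
```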


1) Basics

  • Use‑case name:
  • Business owner (accountable):
  • Operator team / users:
  • Customer‑facing? (Y/N)
  • Decision impact: (Low / Medium / High)
  • Automation level: (Draft only / Auto‑suggest / Auto‑act)

2) Tooling + environment

  • Tool/vendor/system:
  • Hosting / tenant: (org‑managed? where?)
  • Auth: (SSO/MFA? service accounts?)
  • Access model: (RBAC groups)
  • Logging enabled: (what is logged; where)

3) Data (inputs)

  • Data sources: (e.g., KB, CRM, tickets)
  • Data classes used: (D0/D1/D2/D3)
  • Restricted data present? (PII/secrets/regulated)
  • Data minimization approach: (redaction, summarization, fields allowed)
  • Retention: (prompts/outputs stored? how long?)
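
Data minimization is often the hardest field to make concrete. As one illustration, a redaction pass can strip obvious identifiers from prompts before they leave the org boundary. The two patterns below (email address, loose phone format) are illustrative assumptions only, not a complete PII filter.

```python
import re

# Illustrative redaction pass; these patterns are examples,
# not a complete or production-grade PII filter.
PATTERNS = {
    "EMAIL": re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace matched identifiers with a typed placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Contact jane.doe@example.com or +1 555 010 2030."))
# -> Contact [EMAIL] or [PHONE].
```

Whatever mechanism you use, record it in the card so reviewers can verify the allowed fields actually match what reaches the model.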

4) Outputs + distribution

  • Output destinations: (internal doc, customer email, public web, etc.)
  • Exposure level: (O0/O1/O2)
  • Output leakage risks: (PII echo, confidential info, IP)

5) People‑harm scenarios (top 3)

List the top three concrete ways this use‑case could harm people.

  1.
  2.
  3.

6) Required controls (IDs) + implementation notes

Reference control IDs from 04-controls-map.md.

  • Controls: (e.g., C‑H1, C‑Q1, C‑D1, C‑L1, C‑I1)
  • Implementation notes:

7) Human review workflow (HITL)

  • Reviewer role:
  • What must be checked: (facts, tone, policy, PII)
  • When review happens: (before send/publish)
  • Sampling plan: (e.g., 10% weekly)
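
A sampling plan is easier to audit when it is deterministic, so reviewers can reproduce exactly which items were selected. A sketch, assuming a 10% rate and a stable ID per output (both assumptions, not mandated by the template):

```python
import hashlib

def in_review_sample(item_id: str, rate: float = 0.10) -> bool:
    """Deterministic sampling: hash the stable item ID into [0, 1)
    and select the item when the value falls under the sampling rate."""
    digest = hashlib.sha256(item_id.encode()).hexdigest()
    return int(digest[:8], 16) / 0xFFFFFFFF < rate

sampled = [i for i in (f"ticket-{n}" for n in range(1000)) if in_review_sample(i)]
print(len(sampled))   # roughly 100 of 1000 at a 10% rate
```

Because the selection depends only on the ID, the same item is always in (or out of) the sample, which keeps weekly review queues stable.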

8) Quality + safety testing

  • Grounding rules: (must cite KB? links required?)
  • Test prompts / red‑team cases: (top 10)
  • Acceptance criteria: (hallucination rate threshold; escalation rules)
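
Acceptance criteria are easiest to enforce when the red-team run is scored mechanically. A minimal sketch, assuming each test case has been labelled hallucination/no-hallucination by a reviewer; the 5% threshold is an assumed example, not a template requirement:

```python
# Illustrative acceptance gate over a labelled red-team run.
# The threshold is an assumed example; set the real one in section 8.
def passes_acceptance(results: list, max_hallucination_rate: float = 0.05) -> bool:
    """results[i] is True when test case i produced a hallucination."""
    if not results:
        return False           # no evidence -> no acceptance
    rate = sum(results) / len(results)
    return rate <= max_hallucination_rate

run = [False] * 19 + [True]    # 1 hallucination in 20 cases -> 5%
print(passes_acceptance(run))  # True: 0.05 <= 0.05
```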

9) Monitoring metrics

  • Primary metrics: (hallucination/defect rate, DLP hits, complaints)
  • Alert thresholds:
  • Dashboard/link:
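
The alert thresholds above can be checked with a small comparison over current metric values. A sketch; the metric names and limits are illustrative assumptions:

```python
# Illustrative threshold check; metric names and limits are assumptions.
THRESHOLDS = {"defect_rate": 0.05, "dlp_hits": 3, "complaints": 2}

def breached(metrics: dict) -> list:
    """Return the metrics whose current value exceeds its alert threshold."""
    return [m for m, limit in THRESHOLDS.items()
            if metrics.get(m, 0) > limit]

print(breached({"defect_rate": 0.08, "dlp_hits": 1, "complaints": 5}))
# -> ['defect_rate', 'complaints']
```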

10) Kill switch + rollback

  • Who can disable:
  • How to disable:
  • Rollback plan:
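
A kill switch works best when every request checks a single flag that the named owners can flip without a deploy. A minimal sketch, assuming a file-backed flag; the path and mechanism are illustrative, and a feature-flag service or config store would serve the same role:

```python
import os

# Illustrative kill switch: the feature is enabled only while the flag
# file is absent. The owners named in the card create this file (or flip
# the equivalent flag in a config service) to disable the use-case.
FLAG_PATH = "/tmp/ai_use_case_disabled"   # illustrative location

def ai_enabled() -> bool:
    return not os.path.exists(FLAG_PATH)

def handle_request(prompt: str) -> str:
    if not ai_enabled():
        return "AI drafting disabled; routing to manual workflow."
    return f"(model output for: {prompt})"

print(handle_request("Summarize ticket #42"))
```

The rollback plan should also name the non-AI workflow that requests fall back to while the flag is set.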

11) Incident response

  • Incident channel:
  • Triage owner:
  • 24h reporting requirement communicated? (Y/N)

12) Approval + review cadence

  • Approved by: (Risk/Legal/IT)
  • Approval date:
  • Next review date:
  • Exceptions granted: (link)
