
AI Use‑Case Matrix (Intake + Approval)

AI Safety Pack Component

PeopleSafetyLab | February 24, 2026 | 4 min read | Intermediate


Version: v1.0

Purpose

Use this during the AI workshop/intake to classify proposed AI use‑cases and decide:

  1. Status: Approved / Conditional / Prohibited
  2. Required controls
  3. Required evidence artifacts

Most risk comes from: restricted data (D3) + external exposure (O1/O2) + high‑impact decisions (C2).

Step 1 — Describe the use‑case (one row per use‑case)

Fill A–H first.

| Field | What to capture | Examples |
|---|---|---|
| A. Use‑case name | Short label | "Support reply drafting" |
| B. Business owner | Accountable person/team | Head of Support |
| C. Users | Who will operate it | Agents, HR, engineers |
| D. Inputs (data) | Data types + sources | KB articles, tickets, CRM |
| E. Outputs | Where the output goes | Internal doc, customer email |
| F. Automation level | Suggestion vs. auto action | Draft only, auto‑send |
| G. People impact | Who could be harmed | Customers, candidates |
| H. Tooling | Vendor/system + where hosted | M365 Copilot, internal LLM |
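If you track intake rows in code rather than a spreadsheet, the A–H fields map naturally onto a small record type. A minimal sketch in Python (the class and field names are illustrative, not part of the framework):

```python
from dataclasses import dataclass

@dataclass
class UseCaseRow:
    """One intake row: fields A-H from Step 1 (names are illustrative)."""
    name: str                 # A. short label
    business_owner: str       # B. accountable person/team
    users: list[str]          # C. who will operate it
    inputs: list[str]         # D. data types + sources
    outputs: list[str]        # E. where the output goes
    automation_level: str     # F. e.g. "draft only" vs "auto-send"
    people_impact: list[str]  # G. who could be harmed
    tooling: str              # H. vendor/system + where hosted

# Example row taken from the table above
row = UseCaseRow(
    name="Support reply drafting",
    business_owner="Head of Support",
    users=["agents"],
    inputs=["KB articles", "tickets", "CRM"],
    outputs=["customer email"],
    automation_level="draft only",
    people_impact=["customers"],
    tooling="internal LLM",
)
```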

Step 2 — Classify (three levers)

2.1 Data sensitivity (pick one)

  • D0 Public — public/marketing content
  • D1 Internal — routine internal info; not restricted
  • D2 Confidential — contracts, pricing, strategy, client data (non‑PII)
  • D3 Restricted — PII, credentials/secrets, health data, regulated records

Governance note: If a team requests an exception to a default Prohibited classification (e.g., HR screening), require an Exception Decision Record (EDR) using 08-exception-decision-record-template.md with expiry + compensating controls.

2.2 Output exposure (pick one)

  • O0 Internal only — stays inside org
  • O1 External (drafted) — human reviews before sending/publishing
  • O2 External (automated) — sent/published by system with minimal/no review

2.3 Decision criticality (pick one)

  • C0 Low — convenience/formatting; reversible
  • C1 Medium — influences customer outcomes; reversible with effort
  • C2 High — safety/rights/eligibility/finance/legal impact; hard to reverse

Step 3 — Default status rules

| Data (D) | Exposure (O) | Criticality (C) | Default status | Minimum controls (IDs) |
|---|---|---|---|---|
| D0–D1 | O0 | C0–C1 | Approved | C‑D2 |
| D0–D1 | O1 | C0–C1 | Conditional | C‑H1, C‑L1, C‑Q1/C‑Q3 |
| D0–D1 | O2 | any | Conditional / Prohibited | C‑G1 + automation safeguards + C‑I3 + exec sign‑off |
| D2 | O0 | C0–C1 | Conditional | C‑D1, C‑A1, C‑L1, C‑D2 |
| D2 | O1 | any | Conditional | C‑H1, C‑Q1/C‑Q3, C‑L1, C‑I1 |
| D2 | O2 | any | Prohibited (default) | Exception only (C‑G3 + documented rationale + C‑I3) |
| D3 | any | any | Prohibited (default) | Only in explicitly approved compliant tools + DLP + privacy review (C‑G2) + EDR (C‑G3) |
| any | any | C2 | Conditional / Prohibited | Governance review (C‑G1) + audit trail (C‑L2) |
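The table above can be read as an ordered rule list in which the strictest matching row wins: restricted data (D3) and automated confidential output (D2 + O2) are prohibited outright, then high criticality (C2) and automated external output (O2) escalate the status. A sketch in Python (the strictest‑first ordering is one reasonable reading of overlapping rows, not a normative spec):

```python
def default_status(d: str, o: str, c: str) -> str:
    """Map a (D, O, C) classification to its default status.

    d in {"D0".."D3"}, o in {"O0".."O2"}, c in {"C0".."C2"}.
    Rules are checked strictest-first; where table rows overlap
    (e.g. D2/O1/C2), the stricter row is assumed to win.
    """
    if d == "D3":
        return "Prohibited (default)"      # restricted data: exception only
    if d == "D2" and o == "O2":
        return "Prohibited (default)"      # confidential + automated external
    if c == "C2":
        return "Conditional / Prohibited"  # high-impact decision: governance review
    if o == "O2":
        return "Conditional / Prohibited"  # automated external output
    if d == "D2":
        return "Conditional"               # confidential data, O0/O1, C0-C1
    if o == "O1":
        return "Conditional"               # drafted external, human review
    return "Approved"                      # D0-D1, internal only, low criticality
```

For example, `default_status("D1", "O0", "C0")` returns `"Approved"`, while any D3 use‑case returns `"Prohibited (default)"` regardless of exposure or criticality.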

Legend (control IDs): see 04-controls-map.md.

Step 4 — Evidence artifacts (what to store)

For each Conditional use‑case, store:

  • Use‑Case Card (template: 07-use-case-card-template.md)
  • a short test/QA note (what you tested and results)
  • monitoring metric definition (what you measure; thresholds)
  • links to workflow configuration (review step, approvals)

For exceptions or any use‑case involving D3 or C2, also store:

  • governance decision record (meeting minutes or signed approval)
  • privacy review (DPIA‑style where applicable)
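The two evidence lists above can be combined into a small checklist helper: the first block applies to every Conditional use‑case, and the second is appended for exceptions, D3 data, or C2 criticality. A sketch (the artifact names follow the lists above; the function itself is illustrative):

```python
def evidence_checklist(status: str, d: str, c: str,
                       is_exception: bool = False) -> list[str]:
    """Return the evidence artifacts to store for a use-case (Step 4)."""
    artifacts = []
    if status.startswith("Conditional"):
        artifacts += [
            "Use-Case Card (07-use-case-card-template.md)",
            "test/QA note (what was tested and the results)",
            "monitoring metric definition (measures + thresholds)",
            "workflow configuration links (review step, approvals)",
        ]
    # Exceptions, restricted data, or high-impact decisions need more.
    if is_exception or d == "D3" or c == "C2":
        artifacts += [
            "governance decision record (minutes or signed approval)",
            "privacy review (DPIA-style where applicable)",
        ]
    return artifacts
```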

Step 5 — Common use‑cases (starter rows)

| Use‑case | Typical class | Default status | Notes |
|---|---|---|---|
| Meeting notes summarization (internal) | D1/O0/C0 | Approved | Verify key facts; avoid restricted data |
| Drafting customer support replies | D1–D2/O1/C1 | Conditional | Ground in KB; mandatory review; QA sampling |
| HR resume screening/ranking | D3/O0/C2 | Prohibited (default) | High bias/rights impact + personal data |
| Code copilot on internal repos | D2/O0/C1 | Conditional | Secrets handling + repo access controls |
| Contract clause suggestion | D2/O0/C2 | Conditional | Legal review; no auto‑accept |
| Incident report summarization | D2/O0/C1 | Conditional | Avoid speculation; keep audit trail |


PeopleSafetyLab

Expert in AI Safety and Governance at PeopleSafetyLab. Dedicated to building practical frameworks that protect organizations and families, ensuring ethical AI deployment aligned with KSA and international standards.
