
AI Use Policy (PeopleSafetyLab — AI Safety Pack)

Version: v1.0
Applies from: (date)
Owner: (Risk / Security)

1) Purpose

This policy defines how employees and contractors may use AI systems (including LLM chatbots and copilots) to improve productivity without causing people harm or leaking sensitive data.

2) Scope

Applies to:

  • employees, contractors, interns
  • any AI tool used for work (approved or not)
  • all data processed via AI tools (inputs, prompts, uploads, and generated outputs)

3) Non‑negotiable principles

  1. Human accountability: AI output is a suggestion, not a decision.
  2. Protect people: do not use AI in ways that could cause physical, psychological, financial, legal, or dignity harm.
  3. Protect data: never put restricted data into unapproved tools.
  4. Be truthful: do not create deceptive or misleading content.
  5. Be transparent: disclose AI assistance where required by process, contract, or law.

4) Roles and responsibilities

  • Business Owner (Accountable): owns the use‑case and its outcomes.
  • Tool Owner (IT/Security): approves tools, config, access, logging.
  • Data Owner / Privacy: approves which data classes may be used and under what conditions.
  • Risk/Compliance: approves high‑risk use‑cases and exceptions.
  • Users: follow this policy, complete training, report incidents.

5) Approved tools (and how to request approval)

5.1 Approved tools list

Fill in your organization’s approved tools here (tenant‑managed where possible):

  • (Tool 1)
  • (Tool 2)

5.2 Minimum approval requirements

Before adopting a new AI tool or enabling a new AI feature:

  • Security review: data residency/retention, access controls, logging
  • Contract review: sub‑processors, training on your data, breach terms
  • Data/Privacy review: allowed data classes, minimization, retention
  • Register: tool entered into an Approved Tools Register
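
To make the register auditable, one possible shape for a register entry is sketched below in Python; the field names are illustrative rather than mandated by this policy, and they map to the four review steps above.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ToolRegisterEntry:
    """One illustrative row in the Approved Tools Register."""
    tool_name: str
    vendor: str
    approved_data_classes: set[str]   # e.g. {"D0", "D1"}; see section 9.1
    security_review_date: date        # residency, retention, access controls, logging
    contract_review_date: date        # sub-processors, training on your data, breach terms
    privacy_review_date: date         # allowed data classes, minimization, retention
    tool_owner: str                   # IT/Security contact accountable for the config

# Placeholder entry, mirroring the templates in 5.1:
entry = ToolRegisterEntry(
    tool_name="(Tool 1)",
    vendor="(Vendor)",
    approved_data_classes={"D0", "D1"},
    security_review_date=date.today(),   # fill with the actual review dates
    contract_review_date=date.today(),
    privacy_review_date=date.today(),
    tool_owner="(IT/Security contact)",
)
```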

6) Allowed uses (low‑risk examples)

Allowed when using approved tools and allowed data:

  • drafting internal emails and summaries
  • translation of non‑sensitive content
  • brainstorming/outlining and writing assistance
  • code assistance on non‑sensitive repos (no secrets)

7) Conditionally allowed uses (requires controls)

These are allowed only when the use‑case is classified via the matrix (02a-ai-use-case-matrix.md) and required controls are implemented (see 04-controls-map.md). Examples:

  • drafting customer support replies (human review + QA sampling)
  • internal data analysis on Confidential data (approved tool + access/logging)
  • code copilot on internal repos (secrets controls + access controls)

8) Prohibited uses (default)

Do not use AI to:

  • make final decisions on hiring, firing, promotion, compensation, or discipline
  • rank candidates using personal data (unless explicitly approved via governance; see matrix + Exception Decision Record template 08-exception-decision-record-template.md)
  • provide medical/legal/financial advice to customers without approved playbooks and required human review
  • generate instructions for hazardous work (equipment, chemicals, electrical, HSE) outside validated SOPs
  • create deceptive content (impersonation, fraud, misinformation)
  • bypass security controls (phishing, malware generation, exploit instructions)
  • upload/paste Restricted data into unapproved AI tools

9) Data handling rules

9.1 Minimal data classification for AI

Use your organization’s classification scheme. A minimal scheme:

  • D0 Public
  • D1 Internal — routine internal info; not restricted
  • D2 Confidential — contracts, pricing, strategy, client data (non‑PII)
  • D3 Restricted — PII, credentials/secrets, health data, regulated records
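
If tooling (gates, monitors) needs to reason about these labels, the scheme can be encoded directly. A minimal sketch in Python, ordering the classes so that a higher value means more sensitive:

```python
from enum import IntEnum

class DataClass(IntEnum):
    """Minimal classification scheme from 9.1; higher value = more sensitive."""
    D0_PUBLIC = 0
    D1_INTERNAL = 1
    D2_CONFIDENTIAL = 2
    D3_RESTRICTED = 3

# The ordering makes "at most Internal" checks a single comparison:
assert DataClass.D1_INTERNAL < DataClass.D3_RESTRICTED
```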

9.2 What you may input to AI

  • D0 and D1 data (unless otherwise restricted)
  • Sanitized/aggregated D2 data only in approved tools with access controls

9.3 What you must never input to unapproved AI tools

  • any D3 Restricted data (PII, secrets, credentials, tokens, private keys)
  • privileged legal documents, M&A plans, non‑public customer data
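
Taken together, 9.2 and 9.3 reduce to a single gate check. A minimal sketch, assuming the DataClass enum from 9.1 above; the function name and flags are illustrative, and the policy text remains authoritative:

```python
def may_input(data_class: DataClass, tool_is_approved: bool,
              sanitized: bool = False) -> bool:
    """Sketch of the input gate implied by sections 9.2 and 9.3."""
    if data_class <= DataClass.D1_INTERNAL:
        return True   # 9.2: D0/D1 allowed unless otherwise restricted
    if data_class == DataClass.D2_CONFIDENTIAL:
        # 9.2: sanitized/aggregated D2 only in approved tools with access controls
        return tool_is_approved and sanitized
    # 9.3: D3 Restricted is never input by default; any exception goes through
    # governance (see 08-exception-decision-record-template.md).
    return False

# Examples:
assert may_input(DataClass.D1_INTERNAL, tool_is_approved=False)
assert not may_input(DataClass.D3_RESTRICTED, tool_is_approved=True)
```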

10) Human‑in‑the‑loop (HITL) requirements

  • External outputs (customer/public): must be reviewed and approved by an accountable human before sending/publishing (C‑H1).
  • High‑impact decisions: AI may support research/summarization, but final decisions must be human, documented, and auditable (C‑L2, C‑G1).
  • Safety‑critical work: AI cannot replace SOPs; it may only reference approved procedures.
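
One way to make C‑H1 enforceable in a publishing pipeline is to refuse to send anything external without a named human approver on record. A minimal sketch; the type and function names are illustrative:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Approval:
    reviewer: str          # the accountable human (C-H1)
    approved_at: datetime
    notes: str = ""

def publish_external(content: str, approval: Approval | None) -> None:
    """Block external AI output that lacks a recorded human approval."""
    if approval is None or not approval.reviewer:
        raise PermissionError("External AI output requires human approval (C-H1)")
    # Hand off to the real send/publish step here, retaining the approval
    # record so the decision stays documented and auditable (C-G1).
    print(f"published; approved by {approval.reviewer} at {approval.approved_at}")

# Usage:
publish_external("draft reply", Approval("a.reviewer", datetime.now(timezone.utc)))
```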

11) Output quality and truthfulness

Users must:

  • verify facts, numbers, quotes, and sources
  • avoid inventing citations or claiming certainty without evidence
  • clearly mark AI‑generated drafts where required by process

12) Logging, monitoring, and privacy

For approved tools, IT/Security should enable:

  • access logging and periodic access review (C‑L1, C‑A1)
  • monitoring for high‑risk behavior (e.g., mass copy/paste, sensitive keywords)
  • retention controls that minimize stored prompts/outputs while supporting audit needs
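
As an illustration of the sensitive-keyword check, a pre-send hook could scan outbound prompts for obvious secret patterns. A rough sketch; the patterns are examples only, and a real deployment would lean on the organization's DLP tooling rather than a short list like this:

```python
import re

# Example patterns for obvious secrets; deliberately incomplete.
SECRET_PATTERNS = [
    re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),    # private keys
    re.compile(r"\bAKIA[0-9A-Z]{16}\b"),                        # AWS access key IDs
    re.compile(r"(?i)\b(password|api[_-]?key|token)\s*[:=]"),   # credential assignments
]

def flag_prompt(prompt: str) -> list[str]:
    """Return the matched patterns so Security can triage before the prompt leaves."""
    return [p.pattern for p in SECRET_PATTERNS if p.search(prompt)]

# Example: this prompt would be flagged before leaving the tenant.
assert flag_prompt("please debug, api_key=abc123")
```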

13) Incident reporting (AI incidents)

Report an AI incident (or near‑miss) within 24 hours, including:

  • what happened, who was impacted, what data was involved
  • links/screenshots/outputs
  • containment actions taken

Incident channel + owner: (fill)
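
To keep 24-hour reports consistent, the fields above can be captured as one structured record. A minimal sketch with illustrative field names:

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class AIIncidentReport:
    """AI incident / near-miss report mirroring the section 13 fields."""
    reported_at: datetime
    what_happened: str
    who_was_impacted: str
    data_involved: str                                  # data classes and records affected
    evidence: list[str] = field(default_factory=list)   # links/screenshots/outputs
    containment_actions: list[str] = field(default_factory=list)
    near_miss: bool = False
```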

14) Enforcement

Violations may result in removal of access and disciplinary action up to and including termination; vendor access may also be terminated where applicable.

Appendix A — Quick decision guide

  • Not sure if a use‑case is allowed? Use the intake matrix in 02a-ai-use-case-matrix.md.
  • If data is D2/D3 and the tool is not explicitly approved for that class: stop and ask IT/Security.

Appendix B — Definitions

  • AI tool: any system that generates content, code, decisions, or recommendations.
  • People harm: physical harm, discrimination, denial of rights/opportunity, financial loss, privacy violation, or dignity harm.
  • Restricted data: any data classified restricted by policy (e.g., PII, secrets, regulated records).

PeopleSafetyLab

Expert in AI Safety and Governance at PeopleSafetyLab. Dedicated to building practical frameworks that protect organizations and families, ensuring ethical AI deployment aligned with KSA and international standards.
