Template created as part of Audra Streetman’s TI Essentials article: “Quantifying and communicating the value of human-led intelligence in an AI era” | Published March 26, 2026 | feedly.com/ti-essentials/


Responsible AI Integration Guidelines for CTI Teams

Objective:

Enable the use of AI to improve efficiency while preserving analytic rigor, source fidelity, and long-term analyst skill development.

Scope:

Applies to all intelligence production workflows, including research, drafting, analysis, and dissemination.

Policy Requirements:

1. Source Validation and Analytic Grounding

  1. Analysts must review and engage with primary source material before finalizing assessments.
  2. Analysts should select, validate, or have visibility into the sources used as inputs to LLM tools to the greatest extent possible.
  3. AI-generated summaries should be treated as assistive, not authoritative.
  4. Key judgments, especially those involving attribution, confidence, or impact, must be traceable to verified sources.

2. Analytic Ownership and Accountability

  1. Human analysts remain the final arbiter of any analytic judgment and retain full responsibility for all published intelligence products.
  2. AI-generated content must be validated, contextualized, and refined before inclusion.
  3. Verification performed by a model is not equivalent to analyst approval.

3. AI Usage Transparency

  1. Where applicable, intelligence products should disclose in the methodology section how AI was used.
  2. Disclosures should include:
    1. The type of task supported (e.g., synthesis, drafting, brainstorming)
    2. Any limitations or validation steps taken