Template created as part of Audra Streetman’s TI Essentials article: “Quantifying and communicating the value of human-led intelligence in an AI era” | Published March 26, 2026 | feedly.com/ti-essentials/
Responsible AI Integration Guidelines for CTI Teams
Objective:
Enable the use of AI to improve efficiency while preserving analytic rigor, source fidelity, and long-term analyst skill development.
Scope:
Applies to all intelligence production workflows, including research, drafting, analysis, and dissemination.
Policy Requirements:
1. Source Validation and Analytic Grounding
- Analysts must review and engage with primary source material before finalizing assessments.
- To the greatest extent possible, analysts should select, validate, or at minimum have visibility into the sources used as inputs to LLM tools.
- AI-generated summaries should be treated as assistive, not authoritative.
- Key judgments, especially those involving attribution, confidence, or impact, must be traceable to verified sources.
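The traceability requirement above can be expressed as a simple check. The following is a minimal sketch using a hypothetical data model (the field names `claim` and `sources` and the helper `untraceable_judgments` are illustrative assumptions, not part of the policy): a judgment passes only if it cites at least one source and every cited source appears in the team's verified-source set.

```python
# Hypothetical data model: each key judgment is a dict with a "claim"
# and a list of "sources" (identifiers of the material it rests on).

def untraceable_judgments(judgments, verified_sources):
    """Return the claims whose cited sources are missing or not all verified."""
    verified = set(verified_sources)
    return [
        j["claim"]
        for j in judgments
        if not j["sources"] or not set(j["sources"]) <= verified
    ]

judgments = [
    {"claim": "Activity attributed to Group X", "sources": ["rpt-001"]},
    {"claim": "High impact to sector Y", "sources": ["blog-9"]},  # unverified source
]
print(untraceable_judgments(judgments, {"rpt-001", "rpt-002"}))
# -> ['High impact to sector Y']
```

A check like this would run before an assessment is finalized; anything it flags goes back to the analyst for source validation, consistent with the requirement that AI output is assistive rather than authoritative.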
2. Analytic Ownership and Accountability
- Human analysts remain the final arbiter of any analytic judgment and retain full responsibility for all published intelligence products.
- AI-generated content must be validated, contextualized, and refined before inclusion.
- Model verification is not equivalent to analytic approval.
3. AI Usage Transparency
- Where applicable, intelligence products should disclose in the methodology section how AI was used.
- Disclosures should include:
- The type of task supported (e.g., synthesis, drafting, brainstorming)
- Any limitations or validation steps taken
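The disclosure elements listed above lend themselves to a completeness check. This is a sketch under stated assumptions: the field names (`task_type`, `limitations`, `validation_steps`) and the helper `missing_disclosure_fields` are hypothetical mappings of the two bullets to keys, not a format the policy prescribes.

```python
# Hypothetical required fields derived from the disclosure bullets:
# the task supported, plus limitations and validation steps taken.
REQUIRED_DISCLOSURE_FIELDS = {"task_type", "limitations", "validation_steps"}

def missing_disclosure_fields(disclosure):
    """Return the required disclosure fields absent from a methodology entry."""
    return sorted(REQUIRED_DISCLOSURE_FIELDS - disclosure.keys())

entry = {"task_type": "drafting"}  # limitations and validation steps omitted
print(missing_disclosure_fields(entry))
# -> ['limitations', 'validation_steps']
```

A team could run this against each product's methodology metadata during editorial review, so that incomplete AI-usage disclosures are caught before dissemination.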