This Threat Intelligence Essentials toolkit page pairs with Audra Streetman's TI Essentials article on measuring human-led CTI value (published March 26, 2026). Copy the guidelines into your team’s internal database, adapt the metrics tables to the tools you already use, and pick one or two metrics per tier to get started.
<aside>
💡 Read the full article for context.
</aside>
(Each resource is available in Markdown.)
Responsible AI Integration Guidelines for CTI Teams
Objective:
Enable the use of AI to improve efficiency while preserving analytic rigor, source fidelity, and long-term analyst skill development.
Scope:
Applies to all intelligence production workflows, including research, drafting, analysis, and dissemination.
Policy Requirements:
1. Source Validation and Analytic Grounding
- Analysts must review and engage with primary source material before finalizing assessments.
- Analysts should select, validate, or have visibility into the sources used as inputs in LLM tools to the greatest extent possible.
- AI-generated summaries should be treated as assistive, not authoritative.
- Key judgments, especially those involving attribution, confidence, or impact, must be traceable to verified sources.
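As an illustration only (not part of the policy), the traceability requirement above could be supported by a lightweight provenance record that links each key judgment to the sources behind it. All names here are hypothetical, a minimal sketch rather than prescribed tooling:

```python
from dataclasses import dataclass, field

@dataclass
class Source:
    # Hypothetical source record; fields are illustrative only.
    url: str
    validated_by_analyst: bool = False

@dataclass
class KeyJudgment:
    statement: str
    confidence: str  # e.g. "low" / "moderate" / "high"
    sources: list = field(default_factory=list)

    def is_traceable(self) -> bool:
        # A judgment counts as traceable only if it cites at least one
        # source and every cited source was validated by a human analyst.
        return bool(self.sources) and all(
            s.validated_by_analyst for s in self.sources
        )

judgment = KeyJudgment(
    statement="Activity is consistent with previously observed tooling.",
    confidence="moderate",
    sources=[Source(url="https://example.com/report", validated_by_analyst=True)],
)
print(judgment.is_traceable())  # True: every source is human-validated
```

A record like this makes the "traceable to verified sources" requirement checkable at review time instead of relying on memory.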
2. Analytic Ownership and Accountability
- Human analysts remain the final arbiter of any analytic judgment and retain full responsibility for all published intelligence products.
- AI-generated content must be validated, contextualized, and refined before inclusion.
- Model verification is not equivalent to analytic approval.