Documentation requirements vary by framework and risk tier, but a core set of artefacts appears across nearly every regulation:
AI system inventory — A register of all AI systems in use, their purpose, risk classification, and ownership.
Risk assessment records — For each system: the identified risks, their likelihood and impact, and the mitigations in place. These records must be kept current.
Technical documentation — Model cards or equivalent: training data sources, known limitations, performance metrics, bias testing results, intended and prohibited use cases.
Data governance documentation — How training and inference data is sourced, labelled, stored, and managed.
Human oversight procedures — How humans can intervene in, override, or shut down AI systems, and under what circumstances.
Incident log — Records of AI failures, unexpected outputs, near-misses, and how they were resolved.
Post-market monitoring plan — How the system is monitored in production for performance degradation, drift, or emerging risks.
Running an assessment with this tool generates a gap analysis that tells you exactly which of these you're missing and in what priority order to address them.
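The checklist and gap analysis described above can be sketched as a simple data structure. This is a minimal illustration, not the tool's actual schema: the artefact identifiers and the priority weighting are assumptions made for the example.

```python
# Illustrative sketch of a documentation gap analysis.
# Artefact names mirror the core set listed above; the priority
# ordering here is an assumption, not a regulatory requirement.

REQUIRED_ARTEFACTS = [
    "ai_system_inventory",
    "risk_assessment_records",
    "technical_documentation",
    "data_governance_documentation",
    "human_oversight_procedures",
    "incident_log",
    "post_market_monitoring_plan",
]

# Lower number = address first (illustrative weighting only).
PRIORITY = {name: i + 1 for i, name in enumerate(REQUIRED_ARTEFACTS)}


def gap_analysis(present: set[str]) -> list[str]:
    """Return the missing artefacts, highest priority first."""
    missing = [a for a in REQUIRED_ARTEFACTS if a not in present]
    return sorted(missing, key=PRIORITY.__getitem__)


# Example: an organisation that so far maintains only an AI system
# inventory and an incident log.
print(gap_analysis({"ai_system_inventory", "incident_log"}))
```

Run against a real assessment, the `present` set would be populated from evidence gathered per system; here it is hard-coded purely to show the shape of the output.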