As AI governance moves from theoretical principles to hard compliance requirements, legal and privacy professionals need more than just high-level advice—they need the actual text of standards, frameworks, and implementation guides.
Whether you are preparing for the IAPP’s AI Governance Professional (AIGP) certification or simply building a defensible AI compliance program for your organization, having the right library is essential.
Below is a curated, up-to-date resource guide. It builds on the industry-standard Unofficial AIGP Resource Guide by Oliver Patel, preserving the core resources that remain relevant while adding critical 2025 updates.
Disclaimer: These resources are for educational purposes and do not constitute legal advice. If you need support with AI contracts, policies, or compliance, please contact Soundmark Law.
Part 1: The “Core Pack” (Start Here)
If you only bookmark a few links, make it these. These are the foundational documents that define the current global standard for responsible AI.
- NIST AI Risk Management Framework (AI RMF 1.0): The U.S. standard for managing AI risk.
- EU AI Act (Official Text): Regulation (EU) 2024/1689 (Official Journal).
- ISO/IEC 42001:2023: The global standard for AI Management Systems.
- OECD AI Principles: The baseline for international policy.
- Stanford AI Index Report: The definitive data on AI trends (Annual).
- State of AI Report: Industry and research analysis (Annual).
- OWASP Top 10 for LLM Applications: Security risks specific to Generative AI.
- CSA AI Controls Matrix: Auditable controls for AI systems.
Part 2: Comprehensive Resource Library
Organized by the four domains of the AIGP Body of Knowledge.
Domain I: Foundations of AI Governance
Understanding what AI is, why it poses risks, and how to structure a governance program.
I.A What AI is and why it needs governance
- Key Terms for AI Governance (IAPP)
- Machine Learning Glossary (Google)
- Introduction to Deep Learning (MIT)
- OECD AI System Definition
- Designing Machine Learning Systems (Chip Huyen)
- International AI Safety Report (UK Government)
- AI Incident Database (Partnership on AI)
- OECD AI Incidents Monitor
- The AI Risk Repository (MIT FutureTech)
I.B Establishing the Governance Program
- AI Governance in Practice Report (IAPP & FTI)
- Building Accountable AI Programs (CIPL)
- Advancing Accountability in AI (OECD)
- AI Management Essentials (UK Government)
- Living Guidelines on the Responsible Use of Generative AI (European Commission)
I.C Policies and Procedures
- Generative AI for Organizational Use: Internal Policy Considerations (Future of Privacy Forum)
- Microsoft Responsible AI Standard v2
- Model AI Governance Framework for Generative AI (AI Verify Foundation)
Domain II: Laws, Standards, and Frameworks
Navigating the regulatory landscape.
II.A Privacy, Data Protection, and AI
- AI, Data Governance and Privacy (OECD)
- Guidance on AI and Data Protection (UK ICO)
- AI Systems and the GDPR (Belgian DPA)
- A Survey of Machine Unlearning (arXiv)
II.B Liability, IP, and Discrimination
- Database of AI Litigation (George Washington University)
- EU Product Liability Directive (2024 Revision)
- Copyright and Artificial Intelligence (U.S. Copyright Office)
- FTC Guidance on AI Claims (FTC Business Blog)
II.C The EU AI Act
- EU AI Act Official Legal Text (EUR-Lex)
- European AI Office: The main hub for implementation guidance.
- EU AI Act: A Guide (Bird & Bird)
II.D Standards and Trackers
- Global AI Law and Policy Tracker (IAPP)
- AI Standards Hub (Alan Turing Institute)
- ISO/IEC 42001:2023 (AI Management System)
- ISO/IEC 22989:2022 (AI Concepts and Terminology)
- Council of Europe Framework Convention on AI
Domain III: Governing AI Development
Managing risk during the build and test phases.
III.A Designing and Building
- Building LLM Applications for Production (Chip Huyen)
- System Card: OpenAI o1 (OpenAI)
- Llama 3 Model Card (Meta)
III.B Data Governance
III.C Release and Maintenance
- Adversarial Machine Learning: A Taxonomy (NIST AI 100-2)
- Portfolio of AI Assurance Techniques (UK Government)
- Catalogue of Tools & Metrics for Trustworthy AI (OECD)
Domain IV: Governing Deployment and Use
Procurement, evaluation, and operational monitoring.
IV.A Deployment Considerations
- Understanding Large Language Models (Sebastian Raschka)
- Dual-Use Foundation Models Report (NTIA)
- Retrieval Augmented Generation (RAG) Survey (arXiv)
IV.B Assessment and Procurement
- Holistic Evaluation of Language Models (HELM) (Stanford CRFM)
- Guidance on Model Risk Management (SR 11-7) (US Federal Reserve)
- FEAT Principles (Monetary Authority of Singapore)
IV.C Real-World Use and Red Teaming
- OWASP Top 10 for LLM Applications
- Responsible Scaling Policy (Anthropic)
- Frontier Safety Framework (Google DeepMind)
- What is Red Teaming for GenAI? (IBM)
Part 3: The 2025 “Add-On Pack”
New and emerging resources that are critical for operationalizing governance this year.
- ISO/IEC 42005:2025 (AI System Impact Assessment): The international standard for conducting and documenting AI system impact assessments.
- EU General-Purpose AI Code of Practice Process: The AI Office hub for the Code of Practice.
- NIST Cybersecurity Framework Profile for AI (Draft): NIST IR 8596 – Integrating AI into cybersecurity.
- MITRE ATLAS: Threat intelligence and tactics for AI systems.
- Canada’s Voluntary Code of Conduct for Generative AI: Official text and signatories.
Need Help Operationalizing This?
Having the resources is step one. Building the program is step two.
At Soundmark Law, we help organizations translate these frameworks into clear contracts, defensible policies, and practical risk assessments. If you need assistance navigating the intersection of AI technology, intellectual property, and compliance, contact us today.
