AI Due Diligence in M&A

What Boards and Executives Must Know Before the Deal Closes

Introduction: Why AI Is the New Risk Frontier in M&A

Artificial Intelligence (AI) is increasingly central to modern business models and corporate valuations. It drives predictive analytics, automates decision-making processes, personalizes customer experiences, and enables operational efficiencies. However, as businesses pursue AI-driven acquisitions, the risks hidden within AI systems often go undetected during traditional M&A due diligence. These risks - ranging from technical fragility and ethical lapses to regulatory exposure and talent attrition - can transform a promising acquisition into a costly liability.

Boards and executives must understand that acquiring AI means acquiring systems that continuously learn, adapt, and evolve based on data and inputs. These systems can be opaque, data-hungry, and regulation-sensitive. The consequences of mishandling AI due diligence go far beyond IT integration - they extend into governance, compliance, and long-term strategic risk.

This article explores real-world examples where AI has gone wrong, outlines a governance-aligned due diligence framework, and details the consequences for boards, executives, and organizations if AI risks are ignored.

When AI Goes Wrong: Seven Real-World Examples

Strategic Overstatement

IBM’s acquisition of Truven Health Analytics for its Watson Health division promised to revolutionize healthcare with AI. However, integration challenges and mismatches between Truven’s structured data and Watson’s unstructured processing model led to failure. The result was an inability to deliver on strategic promises, culminating in divestiture. This highlights the critical importance of aligning AI assets with strategic outcomes and ensuring value creation is measurable, not assumed.

Lifecycle Chaos

Zillow’s AI-based home-flipping program used models to forecast housing prices. The models failed to account for sudden market shifts and were not designed with adequate retraining or rollback mechanisms. The failure resulted in a write-down exceeding $880 million and the shutdown of the program. This example illustrates the need for robust AI lifecycle governance, including model drift detection, version control, and retraining triggers.

Data Governance Failures

Clearview AI scraped billions of facial images from public websites to build its facial recognition tool. The lack of consent and the controversial use of biometric data triggered global regulatory backlash, lawsuits, and brand damage. This underscores the need to verify dataset legality and ethical sourcing, particularly in privacy-sensitive jurisdictions.

Security Gaps

Microsoft’s AI chatbot, Tay, was released on Twitter and trained on user input. It was quickly manipulated into producing offensive content. Tay lacked adversarial robustness and content filters, leading to a rapid and public failure. Security and resilience must be central to AI deployments, especially those interfacing with the public.

Bias and Explainability Lapses

Amazon developed an internal AI tool to automate résumé screening. It learned from biased historical data and began penalizing applications containing references to women’s groups. The tool was abandoned. This case emphasizes the importance of fairness audits, explainability, and bias mitigation in models used for human resource and high-impact decisions.

Licensing and Intellectual Property Risks

GitHub’s Copilot, an AI-powered coding assistant, was trained on open-source code. Legal challenges emerged around whether Copilot’s outputs infringed software licenses. Organizations acquiring AI must investigate the provenance of training data and ensure intellectual property rights are clearly documented.

Talent Attrition

Google’s absorption of DeepMind Health into Google Health led to talent exits due to ethical concerns and cultural misalignment. NHS partners expressed distrust in the post-transaction governance of health data. The case highlights the importance of cultural integration planning and governance continuity for AI teams.

The 7-Domain Framework for AI Due Diligence

To effectively govern AI acquisitions, a board-ready due diligence framework is required. I developed this framework to span seven interconnected domains: Strategic Fit; Model Lifecycle Governance; Data Governance and Provenance; Security and Resilience; Ethics, Fairness, and Explainability; Legal and Intellectual Property; and Talent and Cultural Integration.

It draws on and aligns with components found across several well-known governance and standards frameworks cited throughout this article, including ISO/IEC 42001, COBIT, NIST AI RMF, ISO/IEC 27001, DAMA-DMBOK2, and the OECD AI Principles.

Domain 1: Strategic Fit

AI assets should clearly align with business goals. Before acquisition:

  • Map AI systems to expected financial and operational outcomes
  • Confirm whether AI is a core differentiator or a complementary capability
  • Evaluate if innovation KPIs and AI-driven ROI are being tracked

Map AI systems to expected financial and operational outcomes

The first consideration in any AI acquisition should be strategic alignment. Boards need clarity on whether the AI capability being acquired supports the acquirer's current business strategy or is intended to extend it into adjacent markets. Without this clarity, organizations risk investing in technologies that create more operational complexity than value.

Confirm whether AI is a core differentiator or a complementary capability

A structured fit-for-purpose review must assess how the AI system contributes to financial performance, whether through cost savings, revenue enhancement, or improved risk management. ISO/IEC 42001 emphasizes the need for alignment between AI initiatives and the broader management system of the organization. AI should not be a side project but a scalable capability contributing to KPIs.

Evaluate if innovation KPIs and AI-driven ROI are being tracked

Another critical factor is determining whether the AI asset is core intellectual property or simply a supporting automation layer. This distinction impacts valuation, integration, and post-close prioritization. Boards must also confirm the presence of innovation KPIs. If such measures are absent, the acquiring firm may inherit a black-box investment with no performance accountability.
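
By way of illustration, the sketch below shows the kind of lightweight KPI and ROI tracking a board might ask to see evidence of. The initiative name, figures, and metric names are hypothetical, not drawn from any real deal.

```python
# Hypothetical innovation-KPI snapshot for one acquired AI capability.
# Figures and metric names are illustrative, not drawn from any real deal.
ai_initiative = {
    "name": "demand-forecasting-model",
    "annual_run_cost": 1_200_000,   # infrastructure, licensing, staff (USD)
    "annual_benefit": 1_950_000,    # measured savings plus revenue uplift (USD)
    "kpis": {"forecast_error_mape": 0.08, "business_adoption_rate": 0.62},
}

roi = (ai_initiative["annual_benefit"] - ai_initiative["annual_run_cost"]) \
    / ai_initiative["annual_run_cost"]
print(f"AI-driven ROI: {roi:.1%}")  # -> AI-driven ROI: 62.5%
```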

Domain 2: Model Lifecycle Governance

Examine how AI models are trained, validated, deployed, monitored, and retired:

  • Ensure the target organization maintains a model registry
  • Confirm use of version control and automated triggers for retraining
  • Review rollback procedures and scenario simulations under different operating conditions

Ensure the target organization maintains a model registry

Lifecycle governance of AI models is critical to operational stability. Unlike traditional software, AI systems require ongoing retraining, tuning, and adaptation. Poor lifecycle governance leads to model drift, reduced accuracy, and increased risk exposure.
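
A minimal registry entry might look like the following sketch. The fields are illustrative of what a due diligence reviewer should expect to find, not any particular registry product's schema, and the example values are hypothetical.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ModelRegistryEntry:
    """Minimal record a reviewer should expect to find for each model."""
    name: str
    version: str
    training_data_ref: str               # pointer to dataset snapshot / lineage
    validation_metrics: dict             # e.g. {"auc": 0.91}
    deployed_since: date
    owner: str                           # accountable team or individual
    rollback_version: str | None = None  # last known-good version

registry = [
    ModelRegistryEntry(
        name="credit-scoring",                             # hypothetical model
        version="2.4.1",
        training_data_ref="s3://datasets/credit/2024-q4",  # hypothetical path
        validation_metrics={"auc": 0.91},
        deployed_since=date(2025, 1, 15),
        owner="risk-analytics",
        rollback_version="2.3.7",
    ),
]
```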

Confirm use of version control and automated triggers for retraining

Boards should ask if the target organization maintains a formal model registry and uses version control mechanisms. These practices align with ISO/IEC 42001's emphasis on configuration and change management. Retraining mechanisms must be explicitly documented and triggered by defined thresholds.
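
The sketch below shows one common form such a trigger can take, assuming the Population Stability Index (PSI) as the drift score and an illustrative threshold; the target may legitimately use different metrics and thresholds.

```python
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index: a common drift score comparing a
    reference distribution with live data (higher means more drift)."""
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf
    e = np.histogram(expected, edges)[0] / len(expected)
    a = np.histogram(actual, edges)[0] / len(actual)
    e, a = np.clip(e, 1e-6, None), np.clip(a, 1e-6, None)
    return float(np.sum((a - e) * np.log(a / e)))

RETRAIN_THRESHOLD = 0.2  # illustrative; the right value is model-specific

reference = np.random.normal(0.0, 1.0, 5000)  # training-time feature sample
live = np.random.normal(0.4, 1.0, 5000)       # shifted production sample

if psi(reference, live) > RETRAIN_THRESHOLD:
    print("Drift detected: trigger retraining and human review")
```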

Review rollback procedures and scenario simulations under different operating conditions

Scenario testing is another key aspect. Boards should review the presence of simulation environments or rollback protocols. COBIT provides guidance on IT risk response and resilience that can be adapted for AI, ensuring business continuity when AI systems falter.

Domain 3: Data Governance and Provenance

Investigate the sources, ownership, consent basis, and quality of training data:

  • Validate data lineage, bias controls, and synthetic data generation
  • Confirm compliance with global privacy regulations such as GDPR and Australia’s Privacy Act
  • Assess alignment with your organization’s ethical and data usage policies

Validate data lineage, bias controls, and synthetic data generation

Data is the foundational input for AI, and poor data governance undermines everything else. Boards must ensure that the data used to train and operate AI systems is ethically sourced, legally compliant, and technically sound. The DAMA-DMBOK2 framework and ISO/IEC 38505-1 offer a structure for evaluating the control environment around data.

Confirm compliance with global privacy regulations such as GDPR and Australia’s Privacy Act

One critical risk is the use of data obtained without appropriate consent. This is especially relevant in jurisdictions like the EU (GDPR) and Australia (Privacy Act 1988), where consent, purpose limitation, and data minimization are legal requirements.
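
As a simple illustration, a reviewer might check that every training record carries a documented lawful basis and that withdrawn consent is honoured. The records and field names below are hypothetical; real checks would run against the target's data catalogue.

```python
# Hypothetical training records; in practice these would come from the
# target's data catalogue. "lawful_basis" echoes GDPR Article 6 terminology.
records = [
    {"id": 1, "lawful_basis": "consent", "consent_withdrawn": False},
    {"id": 2, "lawful_basis": "legitimate_interest", "consent_withdrawn": False},
    {"id": 3, "lawful_basis": None, "consent_withdrawn": False},
    {"id": 4, "lawful_basis": "consent", "consent_withdrawn": True},
]

usable = [r for r in records
          if r["lawful_basis"] is not None and not r["consent_withdrawn"]]
flagged = [r["id"] for r in records if r not in usable]
print(f"Usable: {len(usable)}/{len(records)}; flagged for review: {flagged}")
# -> Usable: 2/4; flagged for review: [3, 4]
```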

Assess alignment with your organization’s ethical and data usage policies

Bias and fairness audits should be mandatory. Data sampling must reflect the diversity of the operational environment, and synthetic data should be validated against real-world outcomes. Boards should seek evidence of alignment with internal ethical and data usage policies.
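
One widely used screening metric in such audits is the disparate impact ratio. The sketch below computes it on hypothetical outcomes, using the "four-fifths rule" as a heuristic trigger for deeper investigation rather than a legal determination.

```python
import numpy as np

# Hypothetical screening outcomes (1 = favourable) for two cohorts.
group_a = np.array([1, 1, 0, 1, 0, 1, 1, 0])  # selection rate 0.625
group_b = np.array([1, 0, 0, 0, 1, 0, 0, 0])  # selection rate 0.250

# Disparate impact ratio: lower selection rate divided by higher.
low, high = sorted([group_a.mean(), group_b.mean()])
di_ratio = low / high

# The "four-fifths rule" (0.8) is a screening heuristic, not a legal test.
verdict = "investigate further" if di_ratio < 0.8 else "within heuristic"
print(f"Disparate impact ratio: {di_ratio:.2f} -> {verdict}")
# -> Disparate impact ratio: 0.40 -> investigate further
```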

Domain 4: Security and Resilience

AI systems must be tested against a range of threats:

  • Perform assessments for adversarial attacks, data poisoning, and inference manipulation
  • Evaluate development pipeline security, access controls, and endpoint monitoring
  • Ensure AI is integrated into enterprise cybersecurity architecture

Perform assessments for adversarial attacks, data poisoning, and inference manipulation

AI systems introduce new attack surfaces, from poisoned training data to manipulated inputs during inference. Security measures must span the full AI development pipeline, aligning with ISO/IEC 27001 and ISA/IEC 62443 standards.
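
A full adversarial evaluation requires specialist tooling (for example, gradient-based attacks), but even a naive random-perturbation smoke test can surface brittle behaviour during diligence. The sketch below assumes a stand-in predictor and an illustrative noise bound.

```python
import numpy as np

rng = np.random.default_rng(0)

def model_predict(x: np.ndarray) -> np.ndarray:
    """Stand-in for the target's model; swap in the real predictor."""
    return (x.sum(axis=1) > 0).astype(int)

X = rng.normal(size=(1000, 8))  # hypothetical inference inputs
baseline = model_predict(X)

# Apply small bounded noise and measure how often predictions flip.
eps = 0.05
perturbed = model_predict(X + rng.uniform(-eps, eps, X.shape))
flip_rate = (perturbed != baseline).mean()
print(f"Prediction flip rate under ±{eps} noise: {flip_rate:.1%}")
```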

Evaluate development pipeline security, access controls, and endpoint monitoring

Development environments, particularly CI/CD pipelines used for deploying AI models, must follow secure coding practices. This includes code reviews, automated security testing, and access controls.

Ensure AI is integrated into enterprise cybersecurity architecture

Boards should ask how AI is integrated into existing security monitoring and response frameworks. NIST AI RMF and COBIT provide a foundation for embedding AI risk into enterprise cybersecurity programs.

Domain 5: Ethics, Fairness, and Explainability

Demand robust ethical governance:

  • Require evidence of bias testing and explainability practices
  • Confirm high-risk decisions involve human oversight and auditability
  • Evaluate the presence and function of ethics committees or responsible AI leadership

Require evidence of bias testing and explainability practices

Ethical oversight of AI is no longer optional. Boards must verify that the AI being acquired adheres to ethical principles such as fairness, non-discrimination, and transparency. OECD AI Principles and Australia's AI Ethics Principles provide relevant frameworks.

Confirm high-risk decisions involve human oversight and auditability

Explainability is especially important in regulated industries like finance and healthcare, where decisions must be auditable. Boards should confirm that human-in-the-loop mechanisms are used for high-risk AI decisions.
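
A minimal sketch of such a routing rule follows. The risk tiers and confidence threshold are illustrative; real values belong in the firm's risk policy.

```python
def route_decision(risk_tier: str, confidence: float,
                   threshold: float = 0.90) -> str:
    """Send high-risk or low-confidence AI decisions to a human reviewer.
    Tiers and threshold are illustrative; real values belong in policy."""
    if risk_tier == "high" or confidence < threshold:
        return "human_review"   # logged for auditability
    return "auto_approve"

print(route_decision("high", 0.97))  # -> human_review (always, by tier)
print(route_decision("low", 0.95))   # -> auto_approve
print(route_decision("low", 0.70))   # -> human_review (low confidence)
```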

Evaluate the presence and function of ethics committees or responsible AI leadership

Boards should look for internal ethics committees or designated Responsible AI roles that evaluate model impact and provide governance. ISO/IEC 42001 includes expectations for ethics and stakeholder engagement.

Domain 6: Legal and Intellectual Property

Legal clarity is essential:

  • Obtain a Software Bill of Materials (SBOM) for models and datasets
  • Review license agreements, indemnities, and patents
  • Confirm commercialization rights and third-party compliance

Obtain SBOMs for models and datasets

One key artifact is the Software Bill of Materials (SBOM), which catalogues all dependencies used in an AI system. This aligns with ISO/IEC 5230 OpenChain guidance for open-source compliance.
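
Real SBOMs are typically exchanged in SPDX or CycloneDX format; the sketch below uses a simplified Python stand-in, with hypothetical components and an assumed firm-specific license allowlist, to show the kind of check a reviewer might run against one.

```python
# Simplified stand-in for an SBOM; real ones use SPDX or CycloneDX.
# Components, versions, and licences below are hypothetical examples.
sbom = [
    {"component": "torch", "version": "2.3.0", "license": "BSD-3-Clause"},
    {"component": "somelib", "version": "1.2.0", "license": "AGPL-3.0"},
    {"component": "training-set-x", "version": "2024.1", "license": None},
]

APPROVED = {"MIT", "BSD-3-Clause", "Apache-2.0"}  # firm-specific allowlist

for item in sbom:
    lic = item["license"]
    if lic not in APPROVED:
        print(f"Review: {item['component']} {item['version']} "
              f"(license: {lic or 'undocumented'})")
# -> flags somelib (AGPL-3.0) and training-set-x (undocumented)
```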

Review license agreements, indemnities, and patents

Boards must verify whether the IP used in the models and training data is owned, licensed, or co-developed. License restrictions may limit use cases or expose the acquiring firm to legal claims.

Confirm commercialization rights and third-party compliance

Another legal concern is the output of generative models. Organizations should confirm they have rights to use, distribute, and commercialize the model outputs. Trade secret protections and data contracts must also be reviewed.

Domain 7: Talent and Cultural Integration

People and governance culture must align:

  • Identify key AI staff and ensure retention plans are in place
  • Evaluate cultural fit and governance compatibility across teams
  • Plan for knowledge transfer and leadership continuity post-close

Identify key AI staff and ensure retention plans are in place

People are at the heart of successful AI. Boards must ensure that top engineers, data scientists, and AI leaders have compelling reasons to stay post-close. ISO/IEC 42001 includes clauses on resourcing and competence.

Evaluate cultural fit and governance compatibility across teams

Cultural integration is another challenge. COBIT's governance and culture domains help assess whether the target's development philosophy aligns with your organization.

Plan for knowledge transfer and leadership continuity post-close

Finally, ensure knowledge transfer through documentation, cross-training, and handover sessions. Boards should ask whether the AI team follows standardized development practices and DevOps or MLOps workflows.

Board-Level Tools for Oversight

To ensure effective oversight and risk mitigation, boards should implement:

  • AI risk registers detailing technical, ethical, legal, and operational risks (a minimal sketch follows this list)
  • Governance dashboards showing AI usage, compliance, and maturity indicators
  • Structured due diligence checklists aligned to the seven domains
  • A 90-day post-close integration roadmap covering risk remediation and cultural alignment
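
As referenced above, the sketch below shows what one row of such a risk register might look like. The fields, scoring scale, and example entries are illustrative; a real register would align with the firm's enterprise risk taxonomy.

```python
from dataclasses import dataclass

@dataclass
class AIRiskEntry:
    """One row of a board-level AI risk register; fields are illustrative."""
    risk_id: str
    category: str     # technical | ethical | legal | operational
    description: str
    likelihood: int   # 1 (rare) .. 5 (almost certain)
    impact: int       # 1 (minor) .. 5 (severe)
    owner: str
    mitigation: str

    @property
    def severity(self) -> int:
        return self.likelihood * self.impact

register = [
    AIRiskEntry("AI-001", "technical", "Model drift in pricing model",
                4, 4, "CTO", "PSI monitoring with retraining triggers"),
    AIRiskEntry("AI-002", "legal", "Undocumented training-data licences",
                3, 5, "General Counsel", "SBOM review before close"),
]

for entry in sorted(register, key=lambda e: e.severity, reverse=True):
    print(entry.risk_id, entry.category, f"severity={entry.severity}")
```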

What Happens If AI Risks Are Ignored - An Australian Context

Organizations that overlook AI risks expose themselves to serious consequences:

  • Revenue loss, asset write-downs, and disrupted operations when models fail
  • Regulatory action and fines under data protection and AI regulations
  • Public trust erosion due to AI bias, unfairness, or poor explainability

For executives:

  • Legal and financial exposure for failure to disclose known risks
  • Reputational harm or job loss due to oversight failure
  • Breach of Section 180 of the Corporations Act 2001 (Cth), which imposes a duty of care and diligence on directors and officers who neglect AI oversight

For boards:

  • Class actions for governance failure
  • ASIC or OAIC investigation for systemic oversight lapses
  • Long-term damage to the organization's social licence to operate

Conclusion: Buying AI Means Buying Accountability

When you acquire AI, you acquire more than algorithms - you acquire systems of intelligence that influence decisions, shape user outcomes, and carry ethical, legal, and operational risks. Boards and executives must embrace rigorous, structured, and standards-aligned AI due diligence.

By applying the seven-domain framework and establishing strong governance and cultural integration practices, organizations can capture the benefits of AI - while protecting themselves from the liabilities that have derailed others.
