AI Due Diligence in M&A
What Boards and Executives Must Know Before the Deal Closes
Introduction: Why AI Is the New Risk Frontier in M&A
Artificial Intelligence (AI) is increasingly central to modern business models and corporate valuations. It drives predictive analytics, automates decision-making processes, personalizes customer experiences, and enables operational efficiencies. However, as businesses pursue AI-driven acquisitions, the risks hidden within AI systems often go undetected during traditional M&A due diligence. These risks - ranging from technical fragility and ethical lapses to regulatory exposure and talent attrition - can transform a promising acquisition into a costly liability.
Boards and executives must understand that acquiring AI means acquiring systems that continuously learn, adapt, and evolve based on data and inputs. These systems can be opaque, data-hungry, and regulation-sensitive. The consequences of mishandling AI due diligence go far beyond IT integration - they extend into governance, compliance, and long-term strategic risk.
This article explores real-world examples where AI has gone wrong, outlines a governance-aligned due diligence framework, and details the consequences for boards, executives, and organizations if AI risks are ignored.
When AI Goes Wrong: Seven Real-World Examples
Strategic Overstatement
IBM acquired Truven Health Analytics in 2016 to bolster its Watson Health division, promising to revolutionize healthcare with AI. However, integration challenges and mismatches between Truven’s structured data and Watson’s unstructured processing model led to failure. The result was an inability to deliver on strategic promises, culminating in the divestiture of Watson Health assets to Francisco Partners in 2022. This highlights the critical importance of aligning AI assets with strategic outcomes and ensuring value creation is measurable, not assumed.
Lifecycle Chaos
Zillow’s AI-based home-flipping program used models to forecast housing prices. The models failed to account for sudden market shifts and were not designed with adequate retraining or rollback mechanisms. The failure resulted in a write-down exceeding $880 million and the shutdown of the program. This example illustrates the need for robust AI lifecycle governance, including model drift detection, version control, and retraining triggers.
Data Governance Failures
Clearview AI scraped billions of facial images from public websites to build its facial recognition tool. The lack of consent and the controversial use of biometric data triggered global regulatory backlash, lawsuits, and brand damage. This underscores the need to verify dataset legality and ethical sourcing, particularly in privacy-sensitive jurisdictions.
Security Gaps
Microsoft’s AI chatbot, Tay, was released on Twitter and trained on user input. It was quickly manipulated into producing offensive content. Tay lacked adversarial robustness and content filters, leading to a rapid and public failure. Security and resilience must be central to AI deployments, especially those interfacing with the public.
Bias and Explainability Lapses
Amazon developed an internal AI tool to automate résumé screening. It learned from biased historical data and began penalizing applications containing references to women’s groups. The tool was abandoned. This case emphasizes the importance of fairness audits, explainability, and bias mitigation in models used for human resource and high-impact decisions.
Licensing and Intellectual Property Risks
GitHub’s Copilot, an AI-powered coding assistant, was trained on open-source code. Legal challenges emerged around whether Copilot’s outputs infringed software licenses. Organizations acquiring AI must investigate the provenance of training data and ensure intellectual property rights are clearly documented.
Talent Attrition
Google’s takeover of DeepMind Health led to talent exits due to ethical concerns and cultural misalignment. NHS partners expressed distrust in the post-acquisition governance of health data. The case highlights the importance of cultural integration planning and governance continuity for AI teams.
The 7-Domain Framework for AI Due Diligence
To effectively govern AI acquisitions, a board-ready due diligence framework is required. I developed this framework to span seven interconnected domains: Strategic Fit; Model Lifecycle Governance; Data Governance and Provenance; Security and Resilience; Ethics, Fairness, and Explainability; Legal and Intellectual Property; and Talent and Cultural Integration.
It draws on and aligns with components found across several well-known governance and standards frameworks, including:
- ISO/IEC 42001 – AI management system (strategy, ethics, security, lifecycle)
- NIST AI RMF – Risk-based governance for AI
- DAMA-DMBOK2 – Data governance (provenance, quality, metadata)
- COBIT / ISO 27001 / ISA/IEC 62443 – Security and operational controls
- ISO/IEC 38505-1 – Governance of data for boards, applicable across industries, including regulated sectors such as finance, healthcare, and critical infrastructure
- OECD AI Principles / Australian AI Ethics Principles – Ethics, fairness, transparency, and accountability.
Domain 1: Strategic Fit
AI assets should clearly align with business goals. Before acquisition:
- Map AI systems to expected financial and operational outcomes
- Confirm whether AI is a core differentiator or a complementary capability
- Evaluate if innovation KPIs and AI-driven ROI are being tracked
Map AI systems to expected financial and operational outcomes
The first consideration in any AI acquisition should be strategic alignment. Boards need clarity on whether the AI capability being acquired supports the acquirer's current business strategy or is intended to extend it into adjacent markets. Without this clarity, organizations risk investing in technologies that create more operational complexity than value.
Confirm whether AI is a core differentiator or a complementary capability
A structured fit-for-purpose review must assess how the AI system contributes to financial performance, whether through cost savings, revenue enhancement, or improved risk management. ISO/IEC 42001 emphasizes the need for alignment between AI initiatives and the broader management system of the organization. AI should not be a side project but a scalable capability contributing to KPIs.
Evaluate if innovation KPIs and AI-driven ROI are being tracked
Another critical factor is determining whether the AI asset is core intellectual property or simply a supporting automation layer. This distinction impacts valuation, integration, and post-close prioritization. Boards must also confirm the presence of innovation KPIs. If such measures are absent, the acquiring firm may inherit a black-box investment with no performance accountability.
Domain 2: Model Lifecycle Governance
Examine how AI models are trained, validated, deployed, monitored, and retired:
- Ensure the target organization maintains a model registry
- Confirm use of version control and automated triggers for retraining
- Review rollback procedures and scenario simulations under different operating conditions
Ensure the target organization maintains a model registry
Lifecycle governance of AI models is critical to operational stability. Unlike traditional software, AI systems require ongoing retraining, tuning, and adaptation. Poor lifecycle governance leads to model drift, reduced accuracy, and increased risk exposure.
Confirm use of version control and automated triggers for retraining
Boards should ask if the target organization maintains a formal model registry and uses version control mechanisms. These practices align with ISO/IEC 42001's emphasis on configuration and change management. Retraining mechanisms must be explicitly documented and triggered by defined thresholds.
Review rollback procedures and scenario simulations under different operating conditions
Scenario testing is another key aspect. Boards should review the presence of simulation environments or rollback protocols. COBIT provides guidance on IT risk response and resilience that can be adapted for AI, ensuring business continuity when AI systems falter.
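To make this concrete, the sketch below shows one way a retraining trigger can be wired to a drift metric. It is a minimal illustration under stated assumptions, not a prescribed implementation: the population stability index (PSI) as the drift metric, the 0.2 threshold, and the retrain() hook are all placeholders an acquirer would adapt to the target's MLOps stack.

```python
# Minimal sketch: drift detection gating a retraining trigger.
# PSI metric, 0.2 threshold, and retrain() hook are illustrative assumptions.
import numpy as np

PSI_RETRAIN_THRESHOLD = 0.2  # common rule-of-thumb cutoff; tune per model

def population_stability_index(expected, actual, bins=10):
    """Compare a score distribution at training time vs. production.
    Higher PSI indicates more drift."""
    cuts = np.percentile(expected, np.linspace(0, 100, bins + 1))
    cuts[0], cuts[-1] = -np.inf, np.inf  # catch values outside training range
    e_pct = np.histogram(expected, cuts)[0] / len(expected)
    a_pct = np.histogram(actual, cuts)[0] / len(actual)
    e_pct, a_pct = np.clip(e_pct, 1e-6, None), np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

def check_and_maybe_retrain(train_scores, live_scores, retrain):
    psi = population_stability_index(train_scores, live_scores)
    if psi > PSI_RETRAIN_THRESHOLD:
        retrain(reason=f"PSI {psi:.3f} exceeded {PSI_RETRAIN_THRESHOLD}")
    return psi

# Simulated training-time vs. drifted production score distributions
rng = np.random.default_rng(0)
psi = check_and_maybe_retrain(
    rng.normal(0.0, 1.0, 10_000),
    rng.normal(0.4, 1.2, 10_000),
    retrain=lambda reason: print("Retraining triggered:", reason),
)
print(f"PSI = {psi:.3f}")
```

In due diligence, the question is not whether the target uses this exact metric, but whether any equivalent trigger exists, is documented, and is tied to version control and rollback procedures.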
Domain 3: Data Governance and Provenance
Investigate the sources, ownership, consent basis, and quality of training data:
- Validate data lineage, bias controls, and synthetic data generation
- Confirm compliance with global privacy regulations such as GDPR and Australia’s Privacy Act
- Assess alignment with your organization’s ethical and data usage policies
Validate data lineage, bias controls, and synthetic data generation
Data is the foundational input for AI, and poor data governance undermines everything else. Boards must ensure that the data used to train and operate AI systems is ethically sourced, legally compliant, and technically sound. The DAMA-DMBOK2 framework and ISO/IEC 38505-1 offer a structure for evaluating the control environment around data.
Confirm compliance with global privacy regulations such as GDPR and Australia’s Privacy Act
One critical risk is the use of data obtained without appropriate consent. This is especially relevant in jurisdictions like the EU (GDPR) and Australia (Privacy Act 1988), where consent, purpose limitation, and data minimization are legal requirements.
Assess alignment with your organization’s ethical and data usage policies
Bias and fairness audits should be mandatory. Data sampling must reflect the diversity of the operational environment, and synthetic data should be validated against real-world outcomes. Boards should seek evidence of alignment with internal ethical and data usage policies.
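As a concrete illustration of what such evidence can look like, the sketch below audits dataset records for lineage and consent metadata. The record schema and the approved consent bases are illustrative assumptions; a real review would run against the target's actual data catalogue and the legal bases applicable in each jurisdiction.

```python
# Minimal sketch: provenance and consent audit over dataset records.
# The record schema and approved bases are illustrative assumptions.
from dataclasses import dataclass
from datetime import date

APPROVED_CONSENT_BASES = {"explicit_consent", "contract", "legitimate_interest"}

@dataclass
class DatasetRecord:
    source: str              # lineage: where the data came from
    consent_basis: str       # legal basis claimed for processing
    collected_at: date
    contains_biometrics: bool

def audit(records):
    findings = []
    for i, r in enumerate(records):
        if r.consent_basis not in APPROVED_CONSENT_BASES:
            findings.append(f"record {i}: unapproved consent basis '{r.consent_basis}'")
        if r.contains_biometrics and r.consent_basis != "explicit_consent":
            findings.append(f"record {i}: biometric data without explicit consent")
        if not r.source:
            findings.append(f"record {i}: missing lineage (no source recorded)")
    return findings

sample = [
    DatasetRecord("crm_export", "contract", date(2023, 5, 1), False),
    DatasetRecord("web_scrape", "none", date(2024, 1, 9), True),
]
for issue in audit(sample):
    print("FLAG:", issue)
```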
Domain 4: Security and Resilience
AI systems must be tested against a range of threats:
- Perform assessments for adversarial attacks, data poisoning, and inference manipulation
- Evaluate development pipeline security, access controls, and endpoint monitoring
- Ensure AI is integrated into enterprise cybersecurity architecture
Perform assessments for adversarial attacks, data poisoning, and inference manipulation
AI systems introduce new attack surfaces, from poisoned training data to manipulated inputs during inference. Security measures must span the full AI development pipeline, aligning with ISO/IEC 27001 and ISA/IEC 62443 standards.
Evaluate development pipeline security, access controls, and endpoint monitoring
Development environments, particularly CI/CD pipelines used for deploying AI models, must follow secure coding practices. This includes code reviews, automated security testing, and access controls.
Ensure AI is integrated into enterprise cybersecurity architecture
Boards should ask how AI is integrated into existing security monitoring and response frameworks. NIST AI RMF and COBIT provide a foundation for embedding AI risk into enterprise cybersecurity programs.
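The simplest form of such an assessment is a perturbation stability check around the model's decisions, sketched below. This is a crude random-noise smoke test, not a substitute for gradient-based adversarial evaluation, and the stub predict() function is a placeholder for the acquired model's real inference call.

```python
# Minimal sketch: random-perturbation robustness smoke test.
# predict() is a stand-in stub for the acquired model's inference call.
import numpy as np

def predict(x):
    # Stub classifier: sign of a fixed linear score.
    w = np.array([0.8, -0.5, 0.3])
    return int(x @ w > 0)

def robustness_rate(inputs, epsilon=0.05, trials=20, seed=0):
    """Fraction of inputs whose prediction never flips under small
    random perturbations of magnitude epsilon."""
    rng = np.random.default_rng(seed)
    stable = 0
    for x in inputs:
        base = predict(x)
        flips = sum(
            predict(x + rng.uniform(-epsilon, epsilon, x.shape)) != base
            for _ in range(trials)
        )
        stable += flips == 0
    return stable / len(inputs)

rng = np.random.default_rng(1)
batch = rng.normal(size=(100, 3))
print(f"Predictions stable under perturbation: {robustness_rate(batch):.0%}")
```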
Domain 5: Ethics, Fairness, and Explainability
Demand robust ethical governance:
- Require evidence of bias testing and explainability practices
- Confirm high-risk decisions involve human oversight and auditability
- Evaluate the presence and function of ethics committees or responsible AI leadership
Require evidence of bias testing and explainability practices
Ethical oversight of AI is no longer optional. Boards must verify that the AI being acquired adheres to ethical principles such as fairness, non-discrimination, and transparency. OECD AI Principles and Australia's AI Ethics Principles provide relevant frameworks.
Confirm high-risk decisions involve human oversight and auditability
Explainability is especially important in regulated industries like finance and healthcare, where decisions must be auditable. Boards should confirm that human-in-the-loop mechanisms are used for high-risk AI decisions.
Evaluate the presence and function of ethics committees or responsible AI leadership
Boards should look for internal ethics committees or designated Responsible AI roles that evaluate model impact and provide governance. ISO/IEC 42001 includes expectations for ethics and stakeholder engagement.
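One concrete artifact boards can request is a disparate-impact screen over historical decisions, sketched below. The four-fifths (0.8) threshold is a common screening heuristic rather than a universal legal standard, and the (group, outcome) input format is an assumption; a passing ratio is a signal for further review, not legal clearance.

```python
# Minimal sketch: disparate-impact screen over historical decisions.
# The 0.8 "four-fifths" threshold is a screening heuristic, not a legal test.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group_label, was_selected) pairs."""
    counts = defaultdict(lambda: [0, 0])  # group -> [selected, total]
    for group, selected in decisions:
        counts[group][0] += int(selected)
        counts[group][1] += 1
    return {g: s / t for g, (s, t) in counts.items()}

def disparate_impact(decisions, threshold=0.8):
    rates = selection_rates(decisions)
    best = max(rates.values())
    ratio = min(rates.values()) / best if best else 0.0
    return rates, ratio, ratio >= threshold

decisions = ([("A", True)] * 60 + [("A", False)] * 40
             + [("B", True)] * 35 + [("B", False)] * 65)
rates, ratio, passes = disparate_impact(decisions)
print(rates, f"ratio={ratio:.2f}", "PASS" if passes else "FLAG FOR REVIEW")
```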
Domain 6: Legal and Intellectual Property
Legal clarity is essential:
- Obtain Software Bill of Materials (SBOM) for models and datasets
- Review license agreements, indemnities, and patents
- Confirm commercialization rights and third-party compliance
Obtain Software Bill of Materials (SBOM) for models and datasets
One key artifact is the Software Bill of Materials (SBOM), which catalogues all dependencies used in an AI system. This aligns with ISO/IEC 5230 OpenChain guidance for open-source compliance.
Review license agreements, indemnities, and patents
Boards must verify whether the IP used in the models and training data is owned, licensed, or co-developed. License restrictions may limit use cases or expose the acquiring firm to legal claims.
Confirm commercialization rights and third-party compliance
Another legal concern is the output of generative models. Organizations should confirm they have rights to use, distribute, and commercialize the model outputs. Trade secret protections and data contracts must also be reviewed.
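To show how an SBOM feeds this review, the sketch below screens a CycloneDX-style component list against a license allowlist. The allowlist, the sample document, and the exact JSON shape are assumptions for illustration; real SBOMs should be validated against the target's actual build artifacts and counsel's license policy.

```python
# Minimal sketch: SBOM license screen during legal due diligence.
# Assumes a CycloneDX-style JSON shape; allowlist and sample are illustrative.
import json

ALLOWED_LICENSES = {"MIT", "Apache-2.0", "BSD-3-Clause"}

SBOM_JSON = """
{
  "components": [
    {"name": "numpy", "version": "1.26.4",
     "licenses": [{"license": {"id": "BSD-3-Clause"}}]},
    {"name": "some-copyleft-lib", "version": "2.0.1",
     "licenses": [{"license": {"id": "GPL-3.0-only"}}]}
  ]
}
"""

def screen_sbom(sbom_text):
    sbom = json.loads(sbom_text)
    flags = []
    for comp in sbom.get("components", []):
        ids = {
            lic.get("license", {}).get("id", "UNKNOWN")
            for lic in comp.get("licenses", [])
        } or {"UNDECLARED"}  # flag components with no declared license
        disallowed = ids - ALLOWED_LICENSES
        if disallowed:
            flags.append((comp["name"], comp.get("version"), sorted(disallowed)))
    return flags

for name, version, licenses in screen_sbom(SBOM_JSON):
    print(f"REVIEW: {name} {version} -> {licenses}")
```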
Domain 7: Talent and Cultural Integration
People and governance culture must align:
- Identify key AI staff and ensure retention plans are in place
- Evaluate cultural fit and governance compatibility across teams
- Plan for knowledge transfer and leadership continuity post-close
Identify key AI staff and ensure retention plans are in placePeople are at the heart of successful AI. Boards must ensure that top engineers, data scientists, and AI leaders have compelling reasons to stay post-close. ISO/IEC 42001 includes clauses on resourcing and competence.
Evaluate cultural fit and governance compatibility across teamsCultural integration is another challenge. COBIT's governance and culture domains help assess whether the target's development philosophy aligns with your organization.
Plan for knowledge transfer and leadership continuity post-closeFinally, ensure knowledge transfer through documentation, cross-training, and handover sessions. Boards should ask whether the AI team follows standardized development practices and DevOps or MLOps workflows.
Board-Level Tools for Oversight
To ensure effective oversight and risk mitigation, boards should implement:
- AI risk registers detailing technical, ethical, legal, and operational risks (a minimal sketch follows this list)
- Governance dashboards showing AI usage, compliance, and maturity indicators
- Structured due diligence checklists aligned to the seven domains
- A 90-day post-close integration roadmap covering risk remediation and cultural alignment
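A minimal sketch of the first of these tools, a risk register keyed to the seven domains, is shown below. The field names, 1-to-5 scoring, and board_report() summary are illustrative assumptions; most organizations would maintain this inside existing GRC tooling rather than standalone code.

```python
# Minimal sketch: AI risk register keyed to the seven due diligence domains.
# Field names and 1-5 scoring are illustrative assumptions.
from dataclasses import dataclass, field

DOMAINS = (
    "Strategic Fit", "Model Lifecycle", "Data Governance", "Security",
    "Ethics & Explainability", "Legal & IP", "Talent & Culture",
)

@dataclass
class Risk:
    domain: str
    description: str
    likelihood: int  # 1 (rare) .. 5 (almost certain)
    impact: int      # 1 (minor) .. 5 (severe)
    owner: str
    mitigation: str = ""

    @property
    def score(self):
        return self.likelihood * self.impact

@dataclass
class RiskRegister:
    risks: list = field(default_factory=list)

    def add(self, risk):
        assert risk.domain in DOMAINS, f"unknown domain: {risk.domain}"
        self.risks.append(risk)

    def board_report(self, top_n=3):
        """Highest-scoring risks first, for board-pack summaries."""
        return sorted(self.risks, key=lambda r: r.score, reverse=True)[:top_n]

register = RiskRegister()
register.add(Risk("Data Governance", "Training data lacks consent records",
                  likelihood=4, impact=5, owner="CDO"))
register.add(Risk("Legal & IP", "No SBOM for model dependencies",
                  likelihood=3, impact=4, owner="General Counsel"))
for r in register.board_report():
    print(f"[{r.score:>2}] {r.domain}: {r.description} (owner: {r.owner})")
```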
What Happens If AI Risks Are Ignored - An Australian Context
Organizations that overlook AI risks expose themselves to serious consequences:
- Revenue loss, asset write-downs, and disrupted operations when models fail
- Regulatory action and fines under data protection and AI regulations
- Public trust erosion due to AI bias, unfairness, or poor explainability
For executives:
- Legal and financial exposure for failure to disclose known risks
- Reputational harm or job loss due to oversight failure
- Breach of Section 180 of the Corporations Act 2001 (Cth) for directors and officers who neglect their duty of care in AI oversight
For boards:
- Class actions for governance failure
- ASIC or OAIC investigation for systemic oversight lapses
- Long-term damage to the organization's social licence to operate
Conclusion: Buying AI Means Buying Accountability
When you acquire AI, you acquire more than algorithms—you acquire systems of intelligence that influence decisions, shape user outcomes, and carry ethical, legal, and operational risks. Boards and executives must embrace rigorous, structured, and standards-aligned AI due diligence.
By applying the seven-domain framework and establishing strong governance and cultural integration practices, organizations can capture the benefits of AI - while protecting themselves from the liabilities that have derailed others.
References
- ACLU: Why Amazon’s Automated Hiring Tool Discriminated Against Women. The ACLU discusses how Amazon's AI tool, trained on resumes predominantly from male applicants, developed biases that disadvantaged women, emphasizing the need for fairness audits in AI systems. https://www.aclu.org/news/womens-rights/why-amazons-automated-hiring-tool-discriminated-against
- AI Business: Microsoft, GitHub, OpenAI Hit with Code Copyright Lawsuit. This article covers the lawsuit filed against Microsoft, GitHub, and OpenAI, accusing them of violating open-source licenses by using code without proper attribution in GitHub Copilot. https://aibusiness.com/companies/microsoft-github-openai-hit-by-code-copyright-lawsuit
- An AI Firm Harvested Billions of Photos Without Consent. The UK's Information Commissioner's Office fined Clearview AI £7.5 million for unlawfully collecting data of British citizens, ordering the deletion of their images from its database. https://www.politico.eu/article/ai-ruling-obstruct-british-efforts-protect-citizens-images-us-data-harvesting/
- Australian AI Ethics Principles. Developed by the Australian Government and supported by CSIRO’s Data61, these principles guide the ethical development and use of AI across sectors. They promote human-centred, safe, and accountable AI practices, emphasising wellbeing, fairness, privacy, transparency, and contestability. Though voluntary, they are designed for broad application across industry, government, and research, and are aligned with the OECD AI Principles while reflecting Australian societal values and legal context. https://www.industry.gov.au/publications/australias-artificial-intelligence-ethics-principles/australias-ai-ethics-principles
- Australian Institute of Company Directors: Guidance on Section 180 Compliance. The AICD’s practice statement helps directors navigate growing non-financial risk and compliance obligations under Section 180 of the Corporations Act. It provides practical guidance on monitoring risks like cybersecurity, AI, and sustainability, highlighting red flags and the importance of critical oversight without requiring technical expertise. https://www.aicd.com.au/board-of-directors/duties/liabilities-of-directors/guidance-on-section-180-compliance.html
- Australian Privacy Principles. Australia’s Privacy Act includes 13 Australian Privacy Principles (APPs), detailed in the Privacy Act 1988 (Cth). https://www.oaic.gov.au/privacy/australian-privacy-principles
- BBC News: Amazon scrapped 'sexist AI' tool. The BBC reports on Amazon's decision to abandon its AI hiring tool due to its bias against women, highlighting concerns about algorithmic discrimination in recruitment processes. https://www.bbc.com/news/technology-45809919
- BBC News: DeepMind faces legal action over NHS data use. This article discusses the legal challenges faced by DeepMind after it was revealed that the company had access to 1.6 million NHS patient records without proper consent, leading to significant privacy concerns. https://www.bbc.com/news/technology-58761324
- BBC News: Microsoft chatbot is taught to swear on Twitter. This article details how Tay was manipulated by Twitter users to produce offensive content, leading to its rapid shutdown. https://www.bbc.com/news/technology-35890188
- Business Insider: No Surprise Amazon's AI Was Biased Against Women, Says Sandra Wachter. This article features expert opinions on Amazon's biased AI hiring tool, discussing the broader implications of algorithmic bias in employment practices. https://www.businessinsider.com/amazon-ai-biased-against-women-no-surprise-sandra-wachter-2018-10
- Case Study: The $4 Billion AI Failure of IBM Watson for Oncology. This case study analyzes the shortcomings of IBM Watson for Oncology, offering insights into the broader issues faced by Watson Health, including integration challenges and unmet expectations. https://www.henricodolfing.com/2024/12/case-study-ibm-watson-for-oncology-failure.html
- Clearview AI and the End of Privacy, with Author Kashmir Hill. This interview explores the implications of Clearview AI's technology on personal privacy, discussing how the company's practices have sparked global debates over surveillance and data rights. https://www.theverge.com/23919134/kashmir-hill-your-face-belongs-to-us-clearview-ai-facial-recognition-privacy-decoder
- Clearview AI: Face-Collecting Company Database Hacked. Clearview AI experienced a data breach that exposed its client list, intensifying scrutiny over its data collection practices and the security of its facial recognition technology.
- Clearview AI Fined $33M for Facial Recognition Image Scraping. The Dutch Data Protection Authority fined Clearview AI €30.5 million for illegally collecting facial images without consent, highlighting significant violations of the EU's GDPR. https://www.forbes.com/sites/roberthart/2024/09/03/clearview-ai-controversial-facial-recognition-firm-fined-33-million-for-illegal-database/
- COBIT 2019 – Control Objectives for Information and Related Technology. Published by ISACA, COBIT is a globally recognized framework for IT governance and management. COBIT 2019 supports enterprise alignment between IT and business goals and offers structured guidance on governance components, performance measurement, and risk management. It is often used to assess governance maturity across AI, cybersecurity, data, and digital transformation programs. https://www.isaca.org/resources/cobit
- Corporations Act 2001 (Cth). The principal legislation governing companies in Australia. It sets out the legal framework for corporate regulation, including company formation, directors' duties, financial reporting, takeovers, and insolvency. Administered by ASIC, the Act aims to promote investor confidence, corporate accountability, and fair business practices. Federal Register of Legislation.
- DAMA-DMBOK2 – Data Management Body of Knowledge. Published by DAMA International, the DAMA-DMBOK2 provides a comprehensive framework for data management professionals. It outlines best practices, principles, and techniques across various data management disciplines, promoting consistency and efficiency in data governance. https://www.dama.org/cpages/body-of-knowledge
- EU Artificial Intelligence Act. A first-of-its-kind legal framework for AI systems, categorizing them by risk level (unacceptable, high, limited, and minimal). It imposes strict requirements on high-risk AI systems used in areas like critical infrastructure, employment, and law enforcement, including transparency, human oversight, data quality, and post-market monitoring. The Act complements GDPR and is expected to set a global precedent for regulating AI safety and accountability. https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence
- Finnegan: Insights from the Pending Copilot Class Action Lawsuit. Finnegan provides an analysis of the ongoing class-action lawsuit against GitHub Copilot, discussing the allegations of Digital Millennium Copyright Act (DMCA) violations and breaches of open-source licenses. https://www.finnegan.com/en/insights/articles/insights-from-the-pending-copilot-class-action-lawsuit.html
- Francisco Partners Scoops Up Bulk of IBM’s Watson Health Unit. TechCrunch reports on the acquisition of Watson Health's data assets by Francisco Partners, providing context on the challenges and outcomes of IBM's foray into healthcare AI. https://techcrunch.com/2022/01/21/francisco-partners-scoops-up-remains-of-ibms-watson-health-unit/
- FOSSA: Analyzing the Legal Implications of GitHub Copilot. FOSSA explores the potential legal challenges GitHub Copilot faces regarding copyright infringement and license compliance of its code suggestions. https://fossa.com/blog/analyzing-legal-implications-github-copilot/
- General Data Protection Regulation (GDPR). Adopted by the European Union in 2016, GDPR is a global benchmark for data protection. It governs how organizations collect, use, store, and share personal data, with strict requirements for transparency, consent, data minimization, and accountability. For AI systems, GDPR mandates lawful data processing, limits on automated decision-making, and rights to explainability. https://gdpr.eu/what-is-gdpr/
- How IBM's Watson Went from the Future of Health Care to Sold Off. Slate provides an in-depth look at the rise and fall of IBM's Watson Health, examining the overpromises and underdeliveries that led to its eventual sale. https://slate.com/technology/2022/01/ibm-watson-health-failure-artificial-intelligence.html
- IBM Bids Farewell to Watson Health Assets. This article discusses IBM's decision to divest its Watson Health assets, including Truven, to Francisco Partners in 2022, marking a significant shift in IBM's healthcare strategy. https://www.mddionline.com/artificial-intelligence/ibm-bids-farewell-to-watson-health-assets
- IBM Watson Health Closes Acquisition of Truven Health Analytics. This press release from IBM details the 2016 acquisition of Truven Health Analytics for $2.6 billion, highlighting the strategic intent to enhance Watson Health's data analytics capabilities. https://www.prnewswire.com/news-releases/ibm-watson-health-closes-acquisition-of-truven-health-analytics-300248222.html
- IEEE Spectrum: In 2016, Microsoft’s Racist Chatbot Revealed the Dangers of Online Conversation. This piece analyzes the vulnerabilities in Tay's design and the broader implications for AI systems interacting with the public. https://spectrum.ieee.org/in-2016-microsofts-racist-chatbot-revealed-the-dangers-of-online-conversation
- IMD: Amazon’s sexist hiring algorithm could still be better than a human. IMD explores the complexities of AI bias, using Amazon's experience to discuss the challenges of creating unbiased AI systems and the importance of diverse training data. https://www.imd.org/research-knowledge/digital/articles/amazons-sexist-hiring-algorithm-could-still-be-better-than-a-human/
- Information Commissioner's Office: Google DeepMind and class action lawsuit. The ICO provides an overview of the legal proceedings and investigations into DeepMind's data-sharing practices with the NHS, outlining the implications for data protection laws. https://ico.org.uk/for-the-public/ico-40/google-deepmind-and-class-action-lawsuit/
- ISO/IEC 5230:2020 – Information Technology – OpenChain Specification. The international standard for open source license compliance in software supply chains. Also known as the OpenChain Specification, it defines the key requirements for establishing a quality open source compliance program, ensuring that organizations using or distributing open source software do so responsibly and legally. This standard supports transparency, traceability, and risk reduction across software dependencies, which is critical for M&A due diligence and SBOM integrity in AI acquisitions. https://www.en-standard.eu/bs-iso-iec-5230-2020-information-technology-openchain-specification/
- ISA/IEC 62443 – Industrial Automation and Control Systems Security. Developed by the International Electrotechnical Commission (IEC), the IEC 62443 series provides a comprehensive framework for securing Industrial Automation and Control Systems (IACS), including Operational Technology (OT). It addresses system architecture, risk assessment, technical controls, and roles across asset owners, system integrators, and component suppliers. https://www.isa.org/standards-and-publications/isa-standards/isa-iec-62443-series-of-standards
- ISO/IEC 27001:2022 – Information Security Management Systems. This standard specifies the requirements for establishing, implementing, maintaining, and continually improving an information security management system (ISMS). It helps organizations manage the security of assets such as financial information, intellectual property, employee details, and information entrusted by third parties. https://www.iso.org/standard/27001
- ISO/IEC 38505-1:2017 – Information technology – Governance of IT – Governance of data. This standard provides guiding principles for members of governing bodies of organizations (owners, directors, partners, executive managers, or similar) on the effective, efficient, and acceptable use of data within their organizations. https://www.iso.org/standard/56639.html
- ISO/IEC 42001:2023 – Artificial Intelligence Management Systems. This international standard provides requirements and guidance for establishing, implementing, maintaining, and continually improving an Artificial Intelligence Management System (AIMS) within organizations. It focuses on managing AI-related risks and opportunities using the Plan-Do-Check-Act methodology. https://www.iso.org/standard/81230.html
- NIST AI Risk Management Framework (AI RMF 1.0). Developed by the U.S. National Institute of Standards and Technology, this voluntary framework helps organizations manage AI risks. It promotes trustworthy AI through four core functions: Govern, Map, Measure, and Manage. The framework is sector-agnostic and adaptable to AI of all risk levels. https://www.nist.gov/itl/ai-risk-management-framework
- OECD AI Principles. Adopted by OECD member countries and partners, these principles promote innovative and trustworthy AI that respects human rights and democratic values. They provide guidance for policymakers and AI actors to ensure responsible stewardship of trustworthy AI. https://www.oecd.org/en/topics/ai-principles.html
- OpenChain Project. A global initiative under the Linux Foundation that supports trust and consistency in open source software supply chains. It brings together over 1,000 organizations to develop and share best practices, training materials, and tools that help companies manage open source compliance effectively. By promoting adoption of the ISO/IEC 5230:2020 standard, OpenChain helps organizations ensure legal clarity, improve supply chain transparency, and reduce risk in the use and distribution of open source software. https://openchainproject.org/
- Privacy Act 1988 (Australia). Australia’s Privacy Act regulates the handling of personal information by Australian government agencies and private sector organizations. It includes 13 Australian Privacy Principles (APPs) covering transparency, consent, use, disclosure, and access to personal data. For AI, it governs the legality of data sourcing, biometric identifiers, and automated decisions with significant impact. Reform proposals are currently under review to strengthen enforcement and address emerging technologies. https://www.oaic.gov.au/privacy/privacy-legislation/the-privacy-act
- Reuters: Amazon scraps secret AI recruiting tool that showed bias against women. This article reveals that Amazon discontinued its AI recruiting tool after discovering it favored male candidates, penalizing resumes that included the word "women's" and downgrading graduates from all-women's colleges. https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G
- Scraping the Web Is a Powerful Tool. Clearview AI Abused It. This article discusses how Clearview AI scraped billions of images from the internet without consent to build its facial recognition database, raising serious privacy concerns. https://www.wired.com/story/clearview-ai-scraping-web/
- TechCrunch: Google completes controversial takeover of DeepMind Health. TechCrunch reports on Google's full integration of DeepMind Health into its operations, a move that sparked debates over patient data control and the transparency of data usage agreements with the NHS. https://techcrunch.com/2019/09/19/google-completes-controversial-takeover-of-deepmind-health/
- TechCrunch: Microsoft apologizes for hijacked chatbot Tay’s ‘wildly inappropriate’ tweets. This article discusses Microsoft's response to the incident and the lessons learned regarding AI deployment. https://techcrunch.com/2016/03/25/microsoft-apologizes-for-hijacked-chatbot-tays-wildly-inappropriate-tweets/
- The Register: Judge dismisses DMCA copyright claim in GitHub Copilot suit. This article reports on the dismissal of certain claims in the lawsuit against GitHub Copilot, highlighting the complexities of applying existing copyright laws to AI-generated code. https://www.theregister.com/2024/07/08/github_copilot_dmca/
- The Verge: GitHub Copilot lawsuit faces major setback. This article discusses the class-action lawsuit against GitHub Copilot, alleging copyright infringement due to its training on publicly available code without proper attribution. https://www.theverge.com/2024/7/9/24195233/github-ai-copyright-coding-lawsuit-microsoft-openai
- Time: Microsoft Is Sorry for That Whole Racist Twitter Bot Thing. This article covers Microsoft's apology and the broader context of the Tay incident. https://time.com/4272822/microsoft-tay-twitter-bot-racist-ai-artificial-intelligence/
- The $500mm+ Debacle at Zillow Offers – What Went Wrong with the AI Models? This piece analyzes the shortcomings of Zillow's AI models, emphasizing the importance of model drift detection and the need for robust AI lifecycle governance. https://insideainews.com/2021/12/13/the-500mm-debacle-at-zillow-offers-what-went-wrong-with-the-ai-models/
- Wired: Why Google consuming DeepMind Health is scaring privacy experts. This piece delves into the concerns of privacy advocates regarding Google's absorption of DeepMind Health, emphasizing fears about the potential misuse of sensitive health data. https://www.wired.com/story/google-deepmind-nhs-health-data/
- Why the iBuying Algorithms Failed Zillow, and What It Says About the Business World's Love Affair with AI. This commentary reflects on the limitations of AI in business applications, using Zillow's experience as a case study to illustrate the potential pitfalls of over-reliance on automated systems. GeekWire.
- Wikipedia: Tay (chatbot). The Wikipedia entry provides a comprehensive overview of Tay's development, deployment, and the ensuing controversy. https://en.wikipedia.org/wiki/Tay_%28chatbot%29
- Zillow's Failed AI House Flipping Scheme: Impact on Real Estate Market. This analysis explores the broader implications of Zillow's failed AI initiative on the real estate market, including the challenges of integrating AI into complex, real-world scenarios. https://www.toolify.ai/ai-news/zillows-failed-ai-house-flipping-scheme-impact-on-real-estate-market-1893693
- Zillow Reports $880M Loss on Failed Home-Flipping Business. This article details Zillow's financial losses from its home-flipping venture, highlighting the challenges faced by the company's AI models in accurately predicting housing market trends. https://globalpropertyinc.com/2022/02/13/zillow-reports-880m-loss-on-failed-home-flipping-business/
- Zillow Quits Home-Flipping Business, Cites Inability to Forecast Prices. This article discusses Zillow's decision to exit the home-flipping market, citing the unpredictability of forecasting home prices as a significant factor. https://www.foxbusiness.com/real-estate/zillow-quits-home-flipping-business-cites-inability-to-forecast-prices