Bridging the AI Gap

What Australian Boards Need from Technology Leaders

Artificial Intelligence (AI) and Large Language Models (LLMs) have rapidly emerged as transformative forces in business. They are reshaping operations, decision-making, and customer engagement, and redefining competitive advantage. For technology leaders, AI represents both an opportunity to enable innovation and a complex risk domain that must be governed with diligence. As boards of directors increasingly seek clarity and assurance around AI adoption, technology leaders must be prepared to articulate not only the threats but also the strategic enablers of secure, responsible AI use.

This article extends the principles discussed in "Leading AI and LLM Adoption Through Cybersecurity" by focusing on the role of technology leaders in engaging with the board. It explores the key areas of knowledge, strategic alignment, and communication necessary for technology leaders to build trust, shape governance, and support innovation with AI.

Translating AI Risk into Board-Level Language

Boards are focused on strategy, risk, and performance. Technology leaders must be able to explain AI-related risks in terms directors understand:

  • Operational Risk: Explain how AI misuse or failure could disrupt services, reduce resilience, or introduce dependency risks.
  • Legal and Regulatory Exposure: Highlight emerging global AI laws, including transparency, explainability, and privacy requirements, and the potential consequences of non-compliance.
  • Reputational Harm: Discuss how AI-driven decisions (e.g., biased outputs or hallucinations) can damage trust with customers, investors, and the public.

Under section 180 of the Corporations Act 2001 (Cth), directors have a duty to exercise their powers and discharge their duties with the care and diligence that a reasonable person would exercise. This includes understanding and managing AI-related risks. A failure to do so could amount to a breach of directors' duties.

An example is the Clearview AI matter, in which the Office of the Australian Information Commissioner (OAIC) found the company in breach of Australian privacy law for scraping biometric data. Directors need to be aware of similar reputational and compliance exposures from AI applications. (OAIC decision summary)

Boards appreciate structured, prioritised risks with clear mitigation strategies. Use existing enterprise risk frameworks to position AI risks in a familiar format.
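As a concrete illustration, the sketch below shows one way an AI-specific risk could be recorded in the same structure as any other enterprise risk. The field names, rating scales, and the example entry are hypothetical; substitute your organisation's own risk taxonomy and scoring approach.

```python
from dataclasses import dataclass, field

# Hypothetical 5x5 rating scales; replace with your enterprise risk framework's own.
LIKELIHOOD = {"rare": 1, "unlikely": 2, "possible": 3, "likely": 4, "almost_certain": 5}
CONSEQUENCE = {"minor": 1, "moderate": 2, "major": 3, "severe": 4, "critical": 5}

@dataclass
class AIRisk:
    """One AI risk expressed in a conventional enterprise risk register format."""
    risk_id: str
    description: str
    category: str                 # e.g. operational, legal/regulatory, reputational
    likelihood: str
    consequence: str
    owner: str
    mitigations: list[str] = field(default_factory=list)

    @property
    def inherent_rating(self) -> int:
        # Simple likelihood x consequence score, as used in many risk matrices.
        return LIKELIHOOD[self.likelihood] * CONSEQUENCE[self.consequence]

# Illustrative entry a technology leader might table with the board.
llm_hallucination = AIRisk(
    risk_id="AI-003",
    description="Customer-facing LLM produces incorrect or misleading advice",
    category="reputational",
    likelihood="possible",
    consequence="major",
    owner="Chief Technology Officer",
    mitigations=["Human review of high-impact responses", "Output monitoring and escalation"],
)
print(llm_hallucination.risk_id, llm_hallucination.inherent_rating)  # AI-003 9
```

Framing AI risks in this familiar register format lets directors compare them directly against the rest of the enterprise risk profile.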

Framing Technology as an Enabler of AI, Not Just a Tool

Too often, technology is perceived solely as a functional or support capability. When discussing AI, shift the narrative:

  • Secure-by-Design AI: Emphasise that security, privacy, and ethical use are built into AI from day one—not retrofitted.
  • Faster Adoption with Fewer Setbacks: Show how early technology involvement prevents rework, accelerates implementation, and ensures compliance.
  • Strategic Differentiation: Position trust and capability as market differentiators. Boards want to hear how well-governed AI can increase stakeholder confidence.

A pertinent example is Telstra, which has actively integrated AI into its customer service operations. In 2024, Telstra expanded two in-house developed generative AI solutions—One Sentence Summary and Ask Telstra—after successful pilots with frontline team members. These tools, leveraging Microsoft's Azure OpenAI capabilities, enable faster and more effective customer interactions by summarising recent customer history and facilitating quick access to internal knowledge bases. Importantly, Telstra has embedded AI governance, ethical use, and privacy considerations into its AI deployments, ensuring responsible and secure development of these solutions. (Telstra Media Release)

Supporting AI Governance and Ethical Decision-Making

Boards need reassurance that AI systems won’t introduce hidden bias, drift, or unchecked decision-making:

  • Governance Models: Recommend establishing cross-functional AI governance committees, with technology leadership as a core contributor.
  • Ethical Use Oversight: Advocate for policies that address fairness, transparency, and accountability.
  • Auditability and Explainability: Propose metrics and systems that allow AI decisions to be explained, traced, and challenged (see the sketch below).
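To make the auditability point concrete, the following is a minimal sketch of how an AI-assisted decision could be logged so it can later be reconstructed and challenged. The record fields and function name are illustrative assumptions, not a prescribed standard or an existing API.

```python
import json
import uuid
from datetime import datetime, timezone

def log_ai_decision(model_name: str, model_version: str, prompt: str,
                    output: str, reviewer: str | None = None) -> dict:
    """Build an audit record for a single AI-assisted decision.

    Capturing the model version, inputs, outputs, and any human reviewer
    allows the decision to be traced and explained after the fact.
    """
    record = {
        "decision_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_name": model_name,
        "model_version": model_version,
        "prompt": prompt,
        "output": output,
        "human_reviewer": reviewer,   # None indicates the output was not reviewed
    }
    # In practice this record would be written to tamper-evident storage;
    # printing stands in for that here.
    print(json.dumps(record, indent=2))
    return record

log_ai_decision("support-assistant", "2025-05-01",
                "Summarise the customer's last call",
                "Customer reported a billing issue; refund approved.",
                reviewer="j.smith")
```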

An example of the risks posed by AI in decision-making processes is the controversy surrounding GitHub Copilot, an AI-powered coding assistant developed by GitHub and OpenAI. In 2022, a class-action lawsuit alleged that Copilot reproduced open-source code without proper attribution, violating software licenses and developer rights.

However, in 2024, a U.S. District Court judge dismissed the majority of these claims, including those involving copyright infringement and breach of license, citing insufficient legal standing and failure to demonstrate specific harm. While the case did not proceed to trial, it drew significant attention to the unresolved legal questions surrounding generative AI and highlighted the lack of regulatory clarity in this space.

Boards should see this as a signal to implement comprehensive governance policies for generative AI, especially where outputs may intersect with intellectual property or open-source compliance. (Legal.io article)

Helping Directors Stay Ahead of the Regulatory Curve

Directors have legal duties to act with care and diligence and in the best interests of the organisation. As AI regulation matures, boards will be held to higher standards:

The Office of the Australian Information Commissioner (OAIC) has also issued specific guidance on privacy and generative AI. This guidance outlines key privacy risks in the training and deployment of large language models (LLMs) and sets expectations around data minimisation, consent, and data provenance. Boards should ensure their organisations are aware of and act in accordance with the OAIC's recommendations, particularly when generative AI tools use personal or sensitive data. The guidance reinforces that even where data is publicly available, it may still be subject to privacy obligations under the Privacy Act 1988 (Cth). (OAIC Generative AI Guidance)

  • Global Momentum: Brief the board on developments such as the EU AI Act, U.S. Executive Orders, and Australia’s emerging stance.
  • Pre-emptive Compliance: Recommend adopting frameworks like ISO/IEC 42001 (AI Management Systems) and mapping to existing standards like ISO/IEC 27001.
  • Technology and AI Synergies: Show how privacy, data protection, and system integrity controls can form the backbone of AI compliance.

ASIC has warned that many organisations face a "governance gap" in their adoption of AI technologies. The regulator has emphasised that directors are accountable for understanding and managing the risks associated with AI systems, including issues of bias, transparency, and data security. This aligns with the broader duty under the Corporations Act for directors to act with care and diligence, particularly when deploying emerging technologies with potential systemic impact. Boards must therefore ensure they have the appropriate oversight and reporting mechanisms in place to avoid compliance failures and reputational damage. (ASIC statement archive)

Educating the Board Without Overwhelming Them

Most directors are not AI experts. Technology leaders should:

  • Demystify Key Concepts: Use plain language to explain model training, data dependencies, and hallucinations.
  • Highlight Use Cases: Describe real examples of AI use within the organisation, from chatbots to fraud detection.
  • Promote Continuous Learning: Encourage boards to invest in structured AI learning sessions and include AI in board development agendas.

Consider providing short, focused briefings or curated reading lists to support director education. For example, Microsoft's AI learning hub and the AICD's AI governance checklist are both accessible and relevant resources for directors.

Ensuring AI Projects are Aligned to Strategy and Risk Appetite

Directors are concerned when AI projects appear ad hoc, risky, or disconnected from enterprise goals:

  • Map Projects to Business Strategy: Help directors see how AI investments support growth, efficiency, or resilience.
  • Clarify Boundaries: Recommend clear guidelines on which functions or data types should not be subject to AI without elevated controls.
  • Integrate AI Into Risk Appetite Statements: Work with risk leaders to include AI within enterprise risk tolerances.

A useful reference point is Commonwealth Bank’s recent initiative to embed AI into its customer service and banking operations as part of a broader digital transformation strategy. According to CBA’s 2024 update, the bank is reimagining banking by deploying AI tools to streamline customer experiences, personalise services, and boost productivity across frontline and back-office functions. Importantly, these initiatives are framed within a strong ethical and governance framework, including a dedicated AI Centre of Excellence and clear principles for responsible AI use. This reflects how a large financial institution can align AI adoption with its strategic imperatives while maintaining oversight and trust. (CBA newsroom)

Building Trust Through Transparency and Proactive Engagement

Trust between boards and technology leaders is earned through transparency and clarity:

  • Provide Reporting and Metrics: Create dashboards that show AI use, risk levels, and mitigation efforts.
  • Be Honest About Unknowns: Acknowledge areas where AI is still evolving, and regulatory expectations are unclear.
  • Engage Early and Often: Ensure technology leaders are embedded in AI discussions from inception, not as a final review.

Using Metrics and Dashboards to Drive Transparency and Oversight

Boards increasingly rely on metrics to monitor performance, assess risk, and make informed decisions. AI adoption is no different. Clear, consistent reporting mechanisms help directors understand how AI is being used, where risks exist, and what controls are in place. Technology leaders can build trust by providing structured oversight tools tailored to the board’s strategic lens.

Key Metrics to Consider:

  • AI Usage Metrics: Volume of AI deployments, types of models in production, business units using AI, and dependency on LLMs.
  • Risk and Control Metrics: Number of AI systems reviewed for bias, explainability, and privacy compliance; percentage of models subject to ethical review or technical audit (see the sketch after this list).
  • Compliance Metrics: Alignment with regulatory frameworks (e.g., OAIC guidance, Privacy Act, ISO/IEC 42001); training completion rates for responsible AI policies.
  • Performance and Benefit Metrics: Time savings, process improvements, customer satisfaction scores, or revenue impact resulting from AI implementations.
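As an illustration of how the risk and control metrics above might be produced, the sketch below counts reviewed models from a hypothetical model inventory. The inventory fields are assumptions; in practice these figures would be drawn from a model registry or GRC tooling rather than a hard-coded list.

```python
# Hypothetical model inventory; in practice this would come from a model
# registry or GRC system.
model_inventory = [
    {"name": "churn-predictor", "unit": "Marketing",     "ethical_review": True,  "privacy_assessed": True},
    {"name": "support-llm",     "unit": "Customer Ops",  "ethical_review": True,  "privacy_assessed": False},
    {"name": "fraud-scorer",    "unit": "Finance",       "ethical_review": False, "privacy_assessed": True},
]

def percentage(models: list[dict], flag: str) -> float:
    """Share of models in the inventory that satisfy a given control flag."""
    return 100 * sum(m[flag] for m in models) / len(models)

print(f"Models with ethical review:   {percentage(model_inventory, 'ethical_review'):.0f}%")
print(f"Models with privacy assessed: {percentage(model_inventory, 'privacy_assessed'):.0f}%")
```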

Dashboards for Board-Level Visibility:

Dashboards should be simple, visual, and aligned to enterprise risk and performance frameworks. They can include:

  • Heatmaps showing AI risk exposure by business unit
  • Traffic-light indicators for regulatory compliance
  • Trends in AI incidents or remediation efforts
  • AI project maturity assessments linked to strategic goals

Boards should be offered regular dashboard updates and the opportunity to request deep dives into areas of concern. These tools foster proactive governance, demonstrate due diligence, and ensure that AI adoption aligns with risk appetite and business strategy.
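To illustrate the traffic-light indicators mentioned above, the following is a minimal sketch of rolling per-unit metrics up into a red/amber/green status for a board dashboard. The business units, metric names, and thresholds are illustrative assumptions and would need to be calibrated to the organisation's risk appetite.

```python
# Hypothetical per-unit metrics, e.g. sourced from incident and control registers.
unit_metrics = {
    "Customer Ops": {"open_ai_incidents": 0, "controls_overdue": 1},
    "Finance":      {"open_ai_incidents": 2, "controls_overdue": 0},
    "Marketing":    {"open_ai_incidents": 0, "controls_overdue": 0},
}

def traffic_light(metrics: dict) -> str:
    """Map raw unit metrics onto a simple red/amber/green status."""
    if metrics["open_ai_incidents"] >= 2 or metrics["controls_overdue"] >= 3:
        return "RED"
    if metrics["open_ai_incidents"] == 1 or metrics["controls_overdue"] >= 1:
        return "AMBER"
    return "GREEN"

for unit, metrics in unit_metrics.items():
    print(f"{unit:15} {traffic_light(metrics)}")
```

However the thresholds are set, the value for directors lies in seeing a consistent status over time, with the ability to drill into the underlying metrics when a unit moves away from green.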

Conclusion: Boardroom Leadership in the AI Era

AI is not just a technology issue; it is a strategic business issue. Technology leaders are uniquely positioned to shape secure and responsible AI adoption, but this requires strong communication and alignment with the board. By translating complex risks, proposing clear governance measures, and supporting continuous education, technology leaders become trusted advisors and innovation enablers.

In a world where AI-driven decisions affect reputation, compliance, and competitive edge, boards will increasingly rely on technology leaders who can connect the technical to the strategic.