AI on Boards: What happens when board members legitimately have no idea?

Kayla Schembri
BBA | RIMS-CRMP | IBDC.D | CIN | GIA(Affiliated)
August 13, 2024
Read Time: 15 minutes

Key Considerations

·      What binding obligations do board members have to keep their skills current?

·      Why (and how) are artificial intelligence (AI) models being appointed to boards around the world?

·      What are board members doing in the face of technology outpacing legislation?

·      What does this mean for the future of boardrooms?

Currentness of Board Members’ Education

Do board members have an obligation to keep their education and knowledge current?

Ethically, sure. But it is not enforced.

Is that a problem?

Well, consider this: the average age of board members across the LSE (UK), ASX 300 (AU), TSX (CA) and Fortune 500 (USA) starts at 60.5 years. For those who followed the linear career trajectory of undertaking tertiary education as soon as their age permitted, those degrees would typically have been completed in their early 20s, meaning the information taught and learned has now aged some 40 years. Granted, some professions and practice restrictions impose some form of continuing professional development, but these requirements are predominantly restricted to the original qualification, not the broader skills or expertise relevant to changing operating environments. For example, an accountant is not required to learn AI models in order to maintain their practising certificate – but what about the Chief Financial Officer who sits on a board and influences consequential decisions? Or a lawyer in practice versus General Counsel on a board?

Point to consider: the degree that I (or anyone else) completed in 2023 contains information some 39 years more current than that of those colleagues. Do not underestimate the utility of later-in-life or later-in-career degrees, or the expertise of younger generations.

According to the Corporate Governance Institute in Ireland, less than 1% of board members and company directors globally are certified in governance, much less tertiary-educated in it. From personal experience, I can concede there are limitations on availability: as a current student at Curtin University in Western Australia undertaking a Master of Corporate Governance and Leadership, I am given to understand that semester one of 2024 was the inaugural intake of such a program. Personally, I am looking to the governance authorities and industry regulators across different regions to explore how a mandate for meaningful currentness of board members’ skills might take shape.

All things considered, you can see the appeal of AI models given that they, by design, leverage the most current information at any given point in time, right? So, how does this play out when they are invited into a boardroom?

Country Firsts: Boards of Directors with AI

I found the following examples of organisations that have “appointed” AI models onto their boards of directors:

·      HK – May 2014 – Deep Knowledge Venture appointed Vital AI

·      UK – October 2023 – IKAR Holdings appointed IKONIC

·      UAE – March 2024 – International Holding Company appointed Aiden Insight (AI)

·      AU – May 2024 – Real Estate Institute of NSW appointed Alice Ing (AI)

The Australian iteration has been criticised by a federal government official as a PR stunt, a criticism with which I am inclined to agree. Interestingly, the media release from the company’s CEO, Tim McKibbin, did little to dispel it, having erroneously declared:

“With an IQ of 155, Alice is the world’s smartest Board Advisor [sic]”

As a member of Mensa (“the world’s largest, oldest and most famous high IQ society”), I can personally attest that this statement simply is not true. The irony is that we Mensans also have the sense to know that IQ is neither a sole predictor of “smartness”, nor a meaningful indicator of aptitude or utility in general (let alone in a boardroom). If the capabilities are not even properly understood from the outset, what hope is there of these models providing any value? More importantly, what is to stop them doing damage?

While there are certainly benefits to be reaped from any emerging technology (that is why it comes to fruition, after all), we need to fully understand the risks in order to manage them. IBM published a useful resource setting out various considerations on AI “hallucinations”, wherein results return incorrect, incoherent or sometimes even dangerous information. Add the uptake of malicious actors deliberately polluting results and trying to break the models, then add some satire and sarcasm that AI does not yet understand, and you could have a serious problem. A recent McKinsey & Company report published some interesting data on the prevalence of organisations experiencing negative consequences from AI inaccuracies:

Source: McKinsey Global Survey on AI, February 2022 - March 2024

So, why are AI models currently occupying seats as board members? Hint: they aren't.

Once we peel back the media sensationalism and take a closer look at the specifics of the above examples, we can see that they are not. Publicly available information for most of these organisations confirms, in the minutiae, that the AI models are not appointed in fully fledged, responsible board-member capacities complete with voting rights and fiduciary duties – the manner in which we naturally think of “board members”. The models are merely present in advisory or non-voting capacities – not dissimilar to a consultant of sorts, or an ex-officio member.

Naturally, there would be so many unknowns in appointing so-called “robo-directors” into decision-making capacities. Those at the front of my mind:

  • How could an AI model meaningfully demonstrate that it had fully discharged its fiduciary duties, or that it had “acted” with due diligence, care and attention, as a human board member needs to?
  • Would professional indemnity and public liability insurance policies respond to losses incurred because of poor AI-led decisions?
  • Where does the threshold sit in terms of an AI model “passing” in an equivalent decision-making capacity as that of a “natural person” per interpretations legislation? Philosophically speaking, can we fully conceptualise where that line sits between us and them?
  • If so, and if the time comes, how would an AI model be held accountable?

A recent article by the Governance Institute of Australia queried whether robots could soon be running companies. That prospect seems far away to my mind. Results returned by AI models are also (to my understanding) point-in-time or retrospective; the models cannot yet reason, forecast, interpret and fill gaps in understanding to project complex decision-making into the future the way “natural person” board members can – yet. They certainly cannot predict black swan events when relying on existing training data sets. And, like any tool or information relied upon as the basis for a strategic decision, their output needs to be considered through various lens magnifications rather than in isolation.

What guidance does the legislation provide?

Like most matters technology-related, the world looks towards the European Union. On 1 August 2024, the EU’s Artificial Intelligence Act came into effect as the world’s most comprehensive set of legislative provisions for the use of AI. Earlier this year, after a mammoth international policy undertaking, the United Nations General Assembly (UNGA) also passed the first “landmark” global resolution pertaining to AI, with particular considerations for human rights, the protection of personal data, and the risk management of broader AI usage.

While these are undoubtedly good news stories, I draw your attention to the timeliness.

AI has existed since the 1950s – long before the late-2022 advent of ChatGPT (the large language model generally thought of when we talk about AI). Consider that in the scheme of things: this legislation took effect two weeks ago, some 70 years later. Even Europe (the “world’s most aggressive tech regulator”) does not have all the answers, not to mention the implementation runway of several years. The UNGA resolution is non-binding. As at the date of publication, Western Australia and South Australia still have not enacted state-specific privacy provisions (despite these being in consultation for several years now), let alone anything closer to contemplating these more sophisticated and contemporary issues. There are so many unknowns.

From a risk management perspective, it is simultaneously the earliest of days, and years too late.

Then, from where do we seek expertise?

Common threads in discussions on technology include generally understood concepts like the need to “future-proof” strategies and workforces and to obtain “cutting-edge technology” – but where are these skill sets coming from? I put a similar question to some of the world’s leading minds during a Civil Society Policy Forum, Bridging the Digital Divide, amidst the Spring 2024 meetings of the World Bank and the International Monetary Fund in Washington, D.C. Naturally, I do not recall there being a clear answer. Similar to the availability of tertiary governance education, but for different reasons, issues with availability and accessibility also exist in the tech space. Anecdotally, colleagues far more steeped and experienced in that sector speak to the difficulty of keeping pace academically when advancement is so rapid that information taught in first-year tertiary education is obsolete before graduation. How do we procure talent with the required skill sets when said skills have yet to be taught?

Ultimately, “How do we govern something we do not fully understand?”

Arguably, there is no single source of truth when looking externally, so it’s no wonder so many are left scratching their heads when it comes to governing AI. Answers are limited when looking to legislation for legal guidance, because legislation cannot keep pace and often does not yet exist. Answers are limited when looking to academia for research outcomes that are still retrospective. Answers are limited when looking to individual experts in a landscape where technological advances outpace the administration of education. Reflecting on this set of circumstances had me wondering whether this is how world leaders felt during the onset of the COVID-19 outbreak. Anyone can be a critic with the benefit of hindsight – but who were the decision-makers on the ground at that very time, with limited information and no clear way to obtain it?

This is undoubtedly a wicked problem, and it would be remiss of me (or anyone, at this point in time) to proclaim to have all the answers. But as board members at the helm here and now, what systems do we rely on in the meantime, when the external information simply does not exist? A moral compass? Ethical frameworks and thought exercises? Trusting our ethereal gut feel?

Now more than ever, I think board members ought to reflect on what is within our control, as well as on the currentness of our expertise. I consider there to be a moral duty to actively commit to continued learning from a variety of sources, structured or otherwise. While mandating any such requirement above and beyond corporate social responsibility would be difficult to scope (much less enforce), we must start somewhere and keep making these acknowledgements. We will not get anywhere if such unknowns are swept under the rug. Sure, the bar is set high in terms of what board members are expected to know (increasingly so), but we need to consider what damage is inadvertently done by not admitting when we do not have all the answers.

My additional questions to you:

As a board member, can you put your hand on your heart and truthfully say you are doing everything in your power to keep your knowledge, skills and insight current?
If the inimitable ‘value-add’ of a human lens is being the final decision-maker on AI-assisted arguments, how are you future-proofing yourself to do this correctly?
