What makes for an effective AI policy?

June 26, 2024

Nash Squared CIO Ankur Anand looks at the components you need for a good AI policy. This article first appeared on boardagenda.com.

Board ownership of an organisation’s policy on using artificial intelligence is essential for effective governance of the technology.

As artificial intelligence and, more recently, generative AI (genAI) become more widely deployed across businesses, we are in the middle of a remarkable era of innovation and possibility. The potential of AI to boost productivity, facilitate problem-solving and enhance creativity is probably greater than anyone imagined.

However, to make the most of that potential, and to successfully bring human intelligence and artificial intelligence together in a healthy balance, guidelines and guardrails are needed. People can’t be left to use AI at work in whatever way they feel like without some clear guiding principles to follow – as well as, where necessary, red lines that should not be crossed.

Part of the governance framework

An AI policy is a foundational element of a successful approach to AI. As AI becomes more ubiquitous, an AI policy becomes part of the governance framework that any organisation functions by. A policy is not merely a ‘nice-to-have’—it is becoming table stakes in the new age we have entered.

Encouragingly, our research at Nash Squared shows that organisations are taking this on board and acting accordingly. Six months ago, in our annual Digital Leadership Report, which surveys technology leaders around the world, only one in five organisations had an AI policy in place. But in a Pulse survey of tech leaders that we conducted recently, that figure has now more than doubled to 42%, with a further 38% planning to create one.

That is a striking increase in a short period of time, and a testament to the speed at which AI is developing and being adopted. Our survey found that almost three-quarters of organisations have deployed genAI to their employees to at least some extent, and one in five have deployed it enterprise-wide.

Elements of an effective AI policy

A good policy will set out clearly what AI can be used for; the protocols and checks and balances that should be followed, such as keeping a human in the loop and requiring human review before any outputs are used or published; and the ethical principles that should govern its use, such as transparency, accountability and fairness.

There should be control mechanisms to guard against algorithmic bias, and the data ingested into AI applications needs to be of the requisite quality and accuracy.

Security is another key concern: there must be clear guidelines to prevent commercially sensitive information from being released into the public domain through certain genAI platforms. Needless to say, respecting data privacy and protection rules is also paramount.

Intellectual property and copyright issues may also come into play. Monitoring systems are needed to track the use of AI and to maintain records for compliance and auditing purposes.

More broadly, an AI policy can also help the organisation with its sustainability and ESG goals: for example, by evaluating the sustainability practices of AI vendors and by embracing use cases that align with the organisation's ESG ambitions. Sustainability deserves particular attention given that AI consumes significant energy, so companies should build a plan to stay focused on carbon neutrality while they continue to roll out AI.

For all of these reasons, an AI policy is essential—and it needs to be clearly communicated and actively discussed across the business, rather than passively published in a quiet corner of the intranet.

Done well, an AI policy can have a double benefit: not only will it reduce risk and help employees use AI safely and productively, but it can become an effective educational tool by facilitating discussion about AI and helping staff understand how AI can support them in their roles. This will increase confidence and adoption.

Owned from the top

Ownership is another key question. Operationally, the AI policy may often be owned by a technology leader such as the CIO, or perhaps by HR, but in my view these roles are more akin to custodians of the policy.

AI has become so strategically important that the policy should be owned at the very top, by the executive committee or even the CEO personally. It is their buy-in, engagement and sponsorship that set the tone and establish the culture needed to embed AI within the way the business operates.

The board has a crucial role to play. Some organisations have established AI committees made up of senior individuals, or ethics boards whose remit includes AI as a significant component. Our Pulse survey shows that a small proportion of businesses (5%) have also appointed a chief AI officer, with a further 7% planning to appoint one.

Whatever the governance structure at a specific organisation, non-executive directors can play an important role in the oversight of AI within the business—strongly encouraging the creation of an AI policy if one is not already in place, reviewing the effectiveness of the policy once published, and ensuring the policy remains aligned with the wider ethical values and social responsibility commitments of the business.

Directors, both executive and non-executive, also have a particular responsibility to ensure that their own use of AI tools and applications, personally and within their teams, is in line with the policy and is ethical, safe and secure. Arguably, there is a shortage of specific training for executives in this area, and this is a gap the market may well move to fill.

Is your organisation up to speed?

An AI policy doesn’t solve all problems or guarantee success: it is notable that the same proportion of Pulse survey respondents (four in ten) are concerned about the risk of AI misuse whether or not their organisation has an AI policy.

Many organisations have also retrofitted their policy after staff had already begun using AI, an inevitable reality given the easy availability of genAI platforms such as ChatGPT, Gemini and Copilot.

The policy should be reviewed regularly, possibly even monthly, given how fast the AI landscape is evolving. As AI evolves, and as potentially mandatory regulatory requirements are introduced, so must the policy.

If your organisation does not yet have an AI policy, is that a defensible position, or should you be strongly advocating for one to be created? If your organisation does have one, is it fit for purpose and providing sufficient support and guidance for staff? These are now key questions as AI becomes one of the defining characteristics of our times.
