Four Gen AI risks every business should know


According to a recent report by McKinsey, “Generative AI has the potential to change the anatomy of work” and will “substantially increase labor productivity across the economy”. At the time of writing, they have identified 63 Generative AI use cases spanning 16 business functions, which gives some idea of the scale of potential transformation. Of course, productivity, efficiency and effectiveness are the holy grail for organisations that want to achieve long-term success in a changing world, so this news has been met with great excitement.

However, amidst the excitement, one word rears its head time and again: risk. Generative AI is still in its very early days, after all. And so businesses need to think carefully about how it is deployed in their organisations – and what the potential outcomes could be. “This comes down to risk tolerance and risk appetite,” says Quentyn Taylor, Canon EMEA’s Senior Director – Information Security and Global Response. “For example, are we willing to accept an AI responding to customer queries, when it will very occasionally get it wrong?” This is just one way in which GenAI could be used to increase efficiency, but as Quentyn points out, what is the reputational risk when a customer receives incorrect information? The answer depends very much on the product and the customer, and this is the big challenge for businesses: having a clear understanding of where GenAI can add true value, and where there is no acceptable level of risk. To even begin making these judgements, it’s critical to understand just what the risks could be.

Protecting intellectual property and commercially sensitive information

This is the area most organisations have addressed first, with some putting a blanket ban on the use of any Generative AI tools and services to protect their corporate privacy. The core issue is that anything you enter into a public GenAI tool may be retained and used as training data for future models, depending on the provider’s terms. So, if you were to ask it to write a speech for an exciting product release, supplying all the details of that product in the request, you would essentially have uploaded business-critical, embargoed information to a globally used tool. If your business lives and dies by its IP, that is an unacceptable level of risk. “On the other hand,” says Quentyn, “if you used Generative AI to write 300 slightly varying descriptions for existing products, is that a problem? Probably not.” Another perspective to consider is the effort of policing the issue versus the outcome: “Is stifling the use of GenAI an effective use of our time? Can we fully block access when there are thousands of new tools being released every day?” he asks.
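To make this risk concrete, here is a minimal sketch of the kind of outbound check a team might place in front of an external GenAI tool, so that obviously sensitive material is caught before it ever leaves the organisation. The embargoed terms and the block-or-allow policy below are hypothetical examples; a real deployment would draw on the organisation’s own data classification rules.

```python
# A minimal sketch of an outbound-prompt check: block text containing
# embargoed terms before it is sent to an external GenAI service.
# The term list and the block/allow policy are hypothetical examples.
import re

EMBARGOED_PATTERNS = [
    r"project\s+aurora",       # hypothetical internal codename
    r"unannounced\s+product",
    r"\bembargoed\b",
]

def is_safe_to_send(prompt: str) -> bool:
    """Return False if the prompt matches any embargoed pattern."""
    return not any(re.search(p, prompt, re.IGNORECASE) for p in EMBARGOED_PATTERNS)

prompt = "Write a launch speech for Project Aurora, our unannounced product."
if is_safe_to_send(prompt):
    print("OK to send to the external tool.")
else:
    print("Blocked: prompt appears to contain commercially sensitive terms.")
```

A simple keyword filter like this will never catch everything, which is precisely Quentyn’s point about policing effort versus outcome – but it is cheap to run and catches the most obvious mistakes.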


When your teams use Generative AI to help with their day-to-day tasks, it’s business-critical to ensure that they don’t upload commercially sensitive information.

Decision-making, bias and communication

Who’s in charge? Decision-making starts at the top, of course, but when the board is drawing conclusions with the help of Generative AI tools, the process has to be crystal clear. Equally, it’s vital that bias is taken into consideration when using GenAI to analyse options for increased productivity and profitability. It’s widely understood that training data must be vast for an AI model to be even remotely fair, but even so, bias exists. This is why many organisations choose not to use such tools in key decision-making areas, with hiring often cited as a problematic one. “You must always understand the context in which any AI-supported decisions are made,” Quentyn underscores. “And this must be clearly communicated to the rest of the organisation, otherwise you risk creating widespread confusion and mistrust in leadership.” This is particularly important when you consider how often organisational decisions need ‘unpicking’ to understand the often very nuanced basis upon which actions are mandated.

Copyright infringement

Right now, some very high-profile lawsuits are underway in which parties believe their creative work has been used to train an AI without their consent. And there are few graver concerns around Generative AI than those over the legality of the content it creates. Yes, some emerging tools (such as Adobe Firefly) are trained only on content the provider has the rights to use, but for many others there is little to no clarity right now as to how safe they are to use for, say, creating a suite of images for a social media campaign or designing a new brand identity. When working with third parties on such activities, Quentyn sees value in “adapting or updating contracts to mitigate the risk and making sure that clear guidance and policy is in place internally”.

When GenAI tells lies

You might have heard the term ‘hallucinations’ used fairly frequently in the context of GenAI. In simple terms, a hallucination is when an AI model generates a false or unrealistic response. It could be something silly, like a completely made-up word or a nonsensical sentence. Or the model could confidently provide false information, which is what happened to two lawyers who presented six case citations in court that turned out to be completely fictitious. These were later found to have been generated by ChatGPT, and the lawyers were ordered to pay a $5,000 fine. AI experts acknowledge the issue and are ‘making progress’, but in the meantime this is an area of exceptional risk for organisations and their leaders. “Fact check, fact check, fact check,” stresses Quentyn. “That's one of the key roles for humans when we use AI to generate underlying content. We must be scrupulous and effective editors.” He also warns of the risks of using GenAI bots to monitor and respond on social media. “Your bot could begin to give answers that, in theory, may well be correct – just not in the context for the question they were given.”
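As an illustration of what that editorial fact-checking step might look like in practice, here is a minimal sketch that flags any case citation in a generated draft that does not appear in a trusted reference list, so a human editor verifies it before it is used. The citation pattern and the trusted list are hypothetical examples; real verification would check against an authoritative source such as a legal database.

```python
# A minimal sketch of the "fact check, fact check, fact check" step:
# flag citations in generated text that are absent from a trusted list,
# so a human editor reviews them before publication.
# The citation format and the trusted list are hypothetical examples.
import re

TRUSTED_CITATIONS = {
    "Smith v. Jones, 2019",
    "Doe v. Acme Corp, 2021",
}

def flag_unverified_citations(text: str) -> list[str]:
    """Return citations found in the text that are not in the trusted list."""
    found = re.findall(r"[A-Z][a-z]+ v\. [A-Z][A-Za-z. ]+, \d{4}", text)
    return [c for c in found if c not in TRUSTED_CITATIONS]

draft = "As held in Smith v. Jones, 2019, and affirmed in Roe v. Example Corp, 2020..."
for citation in flag_unverified_citations(draft):
    print(f"Needs human verification: {citation}")
```

A check like this doesn’t prove anything is true – it simply routes everything the model asserts, but the organisation cannot vouch for, to a scrupulous human editor.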

Overall, Quentyn feels positive that many organisations will adopt AI as a “background benefit”, built into and offered through the services and tools they already use. In this respect, much of the risk is already mitigated through contracts with third parties and by deploying solutions from known, respected and proven partners. “Perhaps a company might use AI to search through emails, looking for phishing scams,” he explains. “That's not something they are likely to specifically code or develop themselves, but they will gain benefit from using tools that include this service.” Ultimately, any business involves a significant amount of risk management, and the new opportunities presented by Generative AI are no different in this respect.
