Companies are developing artificial intelligence (AI) capabilities faster than regulation can keep up, leaving investors exposed to growing risks.
This is according to Ross Teverson, head of Asia and emerging markets at Federated Hermes EOS, who said companies need to proactively address the risks that evolving AI generates.
“The technology itself has evolved quicker than even many of its developers envisaged it would, and that is presenting new challenges,” he said.
“AI capabilities are evolving faster than regulation can keep up with, so it’s really important that companies are pre-empting some of the risks that evolving AI generates.”
Failure to do so could lead to reputational damage, regulatory liabilities and value destruction for shareholders, he warned.
Teverson stressed the importance of investors asking companies the right questions regarding their AI development and demanding that companies themselves ask the right questions internally.
“Those questions are not just around having the right AI principles in place, or having the right governance structures; it’s also about the way in which AI deployment works within a company,” he said.
The importance of internal oversight
“It is important that the person who has final say on how AI is deployed within a business is not the same person who is primarily incentivised by revenue and profit generation,” Teverson said.
One way companies can mitigate the potential business risks posed by AI is to establish internal technology ethics committees, he suggested.
“It should be a concern for investors if the final say on AI deployment is being driven by a divisional head who is incentivised by growing revenue and profits,” he warned, drawing parallels with health and safety.
“The oversight of health and safety needs to be strongly incentivised in a way that is separate to the revenue and profit performance of the division, because the two can often be in conflict,” he explained.
“There may be a temptation to deploy and roll out a new technology as rapidly as possible, particularly in a very competitive sector, where other companies are also deploying AI at pace.”
“In order to offset some of the risks created by that competitive pressure, oversight needs to come from within the business from someone who is not primarily incentivised by that revenue and profit.”
One area of concern Teverson flagged is the potential use of personal data: “If through interacting with an individual an AI system can make inferences, which don’t necessarily constitute personal data under the law, but which could be used in a similar way to personal data for conducting further analysis and reaching conclusions about individuals, then the oversight of a technology ethics committee can ensure that a company decides to act in a way that maintains end user trust and would also be viewed as operating in the spirit of regulation.”
Workforce risks
However, the risks associated with AI deployment extend beyond regulatory exposure, Teverson warned.
He said companies need to be more aware of how AI impacts their workforce to maintain good employee relations and reduce business risk.
Failure to do so could significantly disrupt their business, and potentially the wider industry, as the recent writers’ strike in the US demonstrated.
“Companies should be considering the long-term impact of AI on their own workforce, looking to reskill and retrain employees where relevant and to disclose the actions that they’re taking,” he said.
“That’s another important element of the human capital management impact of AI: whether or not companies are predicting what will change in terms of the way their workforce is impacted.”
This is why his firm is asking companies to disclose how AI is affecting their workforce and the steps they are taking to manage that impact.