The issue
During 2023, as commercial developments in AI gathered pace, governments and companies sought to keep up with the technology and to strike a balance between its commercial and societal risks and its opportunities.
We wanted to understand how our investee companies (in particular, those in the banking sector) were protecting their customers from potential malicious uses of AI targeting their security systems.
As a starting point, we wanted to understand how synthetic voice content created with generative AI could be used to defeat banks' voice recognition security systems.
Activity
As this is an emerging risk, we wanted to build a foundational understanding of the threat itself and of the steps our investee companies in the banking sector were taking to protect themselves. We contacted a small sample of four investee companies held within our equity portfolios, initially via email, requesting a discussion with their investor relations teams on the issue.
One bank did not respond, one responded by email, and two agreed to a meeting, allowing us to discuss both their assessment of the risk and their management actions.
Outcome
All three banks that responded were aware of the risk and, in most cases, were working with technology suppliers to manage security vulnerabilities.
Some banks relied on their multi-factor authentication processes to limit their exposure to attacks on voice authentication. Understandably, all of the banks were reluctant to disclose details of specific security vulnerabilities.
These responses suggest that the banks we engaged with recognise the risk AI poses to voice authentication security and are working to manage it.
Given the pace of change, and as the use of AI becomes more widespread, this is likely to become a pervasive threat; in the short term, both we and our investee companies will need to strengthen our ability to identify, assess, and manage it.