Insurers eye AI's dual role in risk and opportunity
AI vendors can become single points of failure.
The rapid adoption of artificial intelligence (AI) brings cyber risk from both malicious and accidental sources. Four primary factors contribute to this risk: software supply chain threats, expansion of the attack surface, increased data exposure, and growing usage in cybersecurity operations, according to Guy Carpenter.
AI deployment presents a software supply chain risk. Businesses using third-party AI models, such as ChatGPT or Claude, are exposed if those external models are compromised. AI vendors can become single points of failure, as shown by the ChatGPT outages in 2023 and 2024 that affected thousands of users.
A December 2022 attack on PyTorch, a machine-learning library, led to 2,300 malicious downloads. Similarly, over 100 malicious AI models were discovered on HuggingFace, a well-known AI repository.
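One common defence against this kind of package tampering is to pin a known-good checksum for each downloaded artifact and verify it before use, as pip's hash-checking mode does. A minimal sketch (the function name and the sample digest are illustrative, not from any specific tool):

```python
import hashlib
from pathlib import Path

def verify_artifact(path: str, expected_sha256: str) -> bool:
    """Return True only if the file's SHA-256 digest matches the pinned value.

    Pinning and checking digests will not stop every supply-chain attack,
    but it does catch an artifact that was swapped after the hash was
    recorded, as in dependency-tampering incidents.
    """
    digest = hashlib.sha256(Path(path).read_bytes()).hexdigest()
    return digest == expected_sha256
```

In practice the expected digest would come from a lockfile or a vendor-published manifest, recorded at the time the dependency was first vetted.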
Once deployed, AI models also create new attack surfaces. Users interact with models through inputs and outputs, which can be manipulated.
Techniques such as "jailbreaking" trick models into unintended behaviour, potentially causing data exposure, network breaches, or liability due to incorrect outputs. In February 2024, attackers exploited a vulnerability in an open-source library to steal information from ChatGPT users.
AI’s growing role in business operations and product innovation is expected to reshape the insurance landscape.
Guy Carpenter advises insurers and reinsurers to treat AI not just as a risk but as an opportunity for growth. Like the transition to cloud services, AI adoption involves greater reliance on third-party providers. To manage these risks, insurers should gather detailed data on how AI models are developed, deployed, and tested, focusing on confidentiality, integrity, and availability.