More needs to be done to address the lack of skills and resources around AI integration and security, Maria Markstedter, CEO and founder of Azeria Labs, told the audience at the 2024 Dynatrace Perform conference in Las Vegas.
To counter the risks posed by new innovations such as AI agents and composite AI, security teams and data scientists need to improve communication and collaboration.
Markstedter knows the frustration of scarce resources from her own experience reverse engineering Arm processors, and believes better collaboration and understanding are needed to minimize the threats posed by AI integration.
“You can't find vulnerabilities in a system that you don't fully understand,” she said.
The size and complexity of the data processed by AI models are increasing, pushing threat modeling beyond the capabilities of security teams, especially when security professionals lack the resources to understand the threats.
New attacks and new vulnerabilities “require data science and an understanding of how AI systems work, but they also require [that you have] a very deep understanding of security and threat modeling and risk management,” Markstedter said.
This is especially true for new multimodal AI systems that can process multiple data inputs such as text, audio, and images simultaneously. Markstedter points out that although unimodal and multimodal AI systems can process very different data, the general call-and-response nature of human-AI interactions is largely the same.
“The nature of this transaction is not the silver bullet we were hoping for, and this is where AI agents come into the picture.”
AI agents offer a solution to this highly transactional nature, as they essentially have the ability to “think” about a task and derive their own end results depending on the information available at the time.
This poses an unprecedented and significant threat to security teams. “We are essentially entering a world where we have a lot of business data and non-deterministic systems that have access to it and the authority to perform non-deterministic actions, so we have to re-evaluate the concepts of access and identity management,” she said.
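One way to read that re-evaluation in practice is deny-by-default scoping of what an agent may do, so that the model's non-deterministic "plan" can never grant itself permissions. The sketch below is purely illustrative; the names (`TOOL_SCOPES`, `execute_tool`, the role strings) are hypothetical and not drawn from any framework Markstedter named.

```python
# Illustrative sketch of least-privilege gating for an AI agent's tool calls.
# The planner's output is treated as untrusted input: every proposed action
# is checked against a fixed, human-defined policy before it runs.

# Hypothetical policy: map each agent role to the tools it may invoke.
TOOL_SCOPES = {
    "support-bot": {"search_kb", "create_ticket"},
    "analyst-bot": {"search_kb", "run_report"},
}

def execute_tool(role: str, tool: str, payload: dict) -> str:
    """Run a tool call only if the agent's role is scoped for it."""
    allowed = TOOL_SCOPES.get(role, set())
    if tool not in allowed:
        # Deny by default: an unknown role or unscoped tool is refused,
        # no matter how confidently the model asked for it.
        raise PermissionError(f"{role} may not call {tool}")
    return f"executed {tool} with {payload}"

print(execute_tool("support-bot", "create_ticket", {"title": "login bug"}))
```

The design choice mirrors classic identity management: permissions live outside the non-deterministic component, so a manipulated agent can at worst request an action, never authorize one.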
Markstedter argued that because these AI agents need access to internal and external data sources, there is a significant risk that they will be fed malicious data that appears benign to security assessors.
“With multimodal AI, processing this external data becomes even more difficult. Malicious instructions don't have to be part of the text on a website or an email; they can be hidden inside an image or audio file.”
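To make the point concrete, here is a minimal, self-contained illustration (not from the talk) of how a file that passes a naive "is this a valid image?" check can still carry hidden text. It builds a valid 1x1 PNG from scratch and appends an arbitrary string after the final IEND chunk; viewers still render the image, but any pipeline that ingests the raw bytes also ingests the payload.

```python
# Illustrative only: hiding text inside a file that still "looks like" a
# benign image. Built with the Python standard library.
import struct
import zlib

def minimal_png() -> bytes:
    """Build a valid 1x1 grayscale PNG from scratch."""
    def chunk(tag: bytes, data: bytes) -> bytes:
        # PNG chunk layout: 4-byte length, tag, data, CRC32 over tag+data.
        return (struct.pack(">I", len(data)) + tag + data
                + struct.pack(">I", zlib.crc32(tag + data)))
    header = b"\x89PNG\r\n\x1a\n"
    # IHDR: width=1, height=1, bit depth 8, grayscale, default flags.
    ihdr = chunk(b"IHDR", struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0))
    # IDAT: one scanline = filter byte + one pixel byte, zlib-compressed.
    idat = chunk(b"IDAT", zlib.compress(b"\x00\x00"))
    iend = chunk(b"IEND", b"")
    return header + ihdr + idat + iend

payload = b"hidden instruction for a downstream model"
tainted = minimal_png() + payload  # trailing bytes; still renders as a PNG

# A scanner that only validates the PNG signature misses the payload:
print(tainted.startswith(b"\x89PNG"))  # True - passes the "valid image" check
print(payload in tainted)              # True - hidden data rides along
```

This is the simplest possible carrier; real prompt-injection payloads can live in metadata, pixel data, or audio samples, which is exactly why Markstedter says text-only filtering is insufficient for multimodal inputs.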
However, it's not all bad news. The evolution of composite systems that combine multiple AI technologies into a single product will make it possible to “create tools that provide a more interactive and dynamic analytical experience.”
Combining threat modeling with composite AI and encouraging security teams to work more closely with data scientists can not only significantly reduce the risks posed by AI integration, but also strengthen the skill set of security teams.