AI technology has become increasingly essential in today’s business landscape. It is being utilized across various domains such as marketing, design, product development, data science, operations, and sales. Moreover, its applications span beyond the enterprise, with notable projects in vaccine development, cancer detection, resource optimization, and more. However, with each use case comes unique security risks, particularly in terms of privacy, compliance, and the protection of sensitive data and intellectual property.
“Organizations need to plan every generative AI project, in the enterprise and out in the world, not just for current risks, but with the future in mind,” says Vijoy Pandey, SVP at Outshift by Cisco.
One of the primary challenges is the evolving nature of generative AI risks. For example, phishing attacks have become increasingly sophisticated, incorporating deepfakes to deceive individuals. Additionally, users often input sensitive information into generative AI models, raising concerns about data privacy violations. Keeping up with new regulations that address these risks is crucial.
Another significant concern is the vulnerability of the generative AI pipeline itself. Attackers can tamper with the pipeline at various stages, resulting in compromised models and inaccurate predictions. This can have serious consequences for both customers and the internal business users who rely on generative AI tools.
“We’re looking at issues like data poisoning attacks and model inversion attacks and more, and detection is the primary issue,” Pandey explains. “We all rely on confidence intervals and a bar that we set for the entire pipeline to say yay or nay, whether I should trust the output or not. And if the attacks themselves are shifting towards compromising the pipeline, that bar or that confidence level might work against you.”
Detecting problems in the generative AI pipeline is challenging, as issues often accumulate gradually over time. However, security frameworks like MITRE ATLAS and the OWASP Top 10 provide support in addressing known security issues related to generative AI. Nevertheless, considering the rapidly evolving nature of the technology, security measures must keep pace.
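To make the detection problem concrete, the sketch below compares each new batch of model confidence scores against a rolling baseline rather than a single fixed bar, which is one way a slow, cumulative slide can be surfaced. It is an illustrative assumption, not a technique drawn from MITRE ATLAS, OWASP, or Pandey's team; the class name, window size, and threshold are all hypothetical.

```python
from collections import deque
from statistics import mean, stdev

class DriftMonitor:
    """Flag gradual degradation in model confidence that a fixed
    pass/fail threshold would miss (illustrative sketch only)."""

    def __init__(self, window: int = 20, max_drop: float = 2.0):
        self.baseline = deque(maxlen=window)  # recent per-batch mean confidences
        self.max_drop = max_drop              # allowed drop, in baseline std deviations

    def check_batch(self, confidences: list[float]) -> bool:
        """Return True if the batch looks consistent with the rolling baseline."""
        batch_mean = mean(confidences)
        if len(self.baseline) >= 2:
            base_mean = mean(self.baseline)
            base_std = stdev(self.baseline) or 1e-6
            # A slow, steady slide can stay above any single fixed bar,
            # but shows up as a drop relative to the recent baseline.
            if (base_mean - batch_mean) / base_std > self.max_drop:
                return False
        self.baseline.append(batch_mean)
        return True

monitor = DriftMonitor()
for batch in [[0.92, 0.90, 0.91], [0.90, 0.89, 0.90], [0.78, 0.75, 0.80]]:
    if not monitor.check_batch(batch):
        print("Possible pipeline compromise or data drift - investigate")
```

The point of the sketch is simply that the "bar" Pandey describes needs to be evaluated against recent behavior, not set once and trusted indefinitely.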
The exposure of intellectual property (IP) is also a significant risk when using off-the-shelf generative AI models. These models rely on vast amounts of data, some of which may be sensitive or proprietary to the organization. Securing that data and preventing unauthorized access is essential for protecting IP.
Furthermore, the lack of recency and specificity in standard off-the-shelf models can limit their utility. Retrieval-augmented generation (RAG) addresses this by incorporating real-time context and user-specific information into model output at inference time. RAG allows models to draw on new information as it becomes available, minimize errors, and keep private data and IP protected.
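As a rough sketch of the RAG pattern described above, the example below retrieves the most relevant internal documents for a query and folds them into the prompt, so proprietary data stays in the organization's own store rather than in the model's training set. The keyword-overlap scoring and the `call_llm` stub are simplifying assumptions; a production system would use an embedding model, a vector index, and the organization's actual model endpoint.

```python
# Minimal sketch of retrieval-augmented generation (RAG).
# call_llm() is a hypothetical stand-in for the model endpoint, and
# scoring is naive keyword overlap for brevity; a real deployment
# would use embeddings and a vector database.

INTERNAL_DOCS = [
    "Q3 incident report: model API keys rotated after suspected leak.",
    "Data handling policy: customer records never leave the EU region.",
    "Release notes: fraud-detection model v2 deployed to production.",
]

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k documents sharing the most words with the query."""
    q_words = set(query.lower().split())
    scored = sorted(docs, key=lambda d: len(q_words & set(d.lower().split())), reverse=True)
    return scored[:k]

def call_llm(prompt: str) -> str:
    """Placeholder for the organization's model endpoint."""
    return f"[model response to {len(prompt)} prompt characters]"

def answer(query: str) -> str:
    context = "\n".join(retrieve(query, INTERNAL_DOCS))
    prompt = (
        "Answer using only the context below. If the context is "
        f"insufficient, say so.\n\nContext:\n{context}\n\nQuestion: {query}"
    )
    return call_llm(prompt)

print(answer("What is our policy on customer data residency?"))
```

Because the sensitive documents are only retrieved at query time, they can stay behind the organization's own access controls instead of being baked into a third-party model.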
As generative AI becomes more pervasive, organizations need to adopt a zero-trust approach. This means considering potential vulnerabilities at every stage of the pipeline, from data acquisition to model deployment and application usage. Documentation and security policies play a crucial role in ensuring accountability and prioritizing security measures.
“Make sure that you are capturing the intent of what you want to do: cataloging the data sources, cataloging the models that are being used in production and that are being used to train and iterate on this pipeline,” Pandey advises. “Then catalog the applications themselves, and categorize these applications based on how critical they are, and make sure that your security policies and your guardrails actually match that criticality.”
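One way to act on that advice is to keep the catalog itself machine-readable, so guardrails can be checked against each application's criticality automatically. The sketch below is an illustrative assumption of what such a catalog and audit might look like; the tiers, field names, control names, and applications are hypothetical, not a Cisco or Outshift schema.

```python
# Illustrative application catalog: each entry records its data sources,
# the model behind it, a criticality tier, and the guardrails in place.
REQUIRED_CONTROLS = {
    "high":   {"pii_redaction", "output_moderation", "audit_logging", "human_review"},
    "medium": {"pii_redaction", "output_moderation", "audit_logging"},
    "low":    {"audit_logging"},
}

CATALOG = [
    {
        "app": "support-assistant",
        "model": "foundation-model-v1",          # hypothetical model name
        "data_sources": ["ticket-history", "kb-articles"],
        "criticality": "high",
        "controls": {"pii_redaction", "audit_logging"},
    },
    {
        "app": "internal-doc-search",
        "model": "embedding-v2",
        "data_sources": ["wiki"],
        "criticality": "low",
        "controls": {"audit_logging"},
    },
]

def audit(catalog):
    """Report applications whose guardrails fall short of their criticality tier."""
    for entry in catalog:
        missing = REQUIRED_CONTROLS[entry["criticality"]] - entry["controls"]
        if missing:
            print(f"{entry['app']}: missing {sorted(missing)}")

audit(CATALOG)
```

Keeping the catalog in this form makes the gap between an application's criticality and its actual guardrails something that can be reviewed continuously rather than once a year.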
Adding layers of security to the infrastructure is vital, as a single failure could compromise the entire system. Mitigating risks in a generative AI environment requires new techniques and stochastic processes to identify and address security issues effectively.
“The way to tackle security in a generative AI environment is through stochastic processes — building AI models that handle security in other models, flagging issues in generated content when things go haywire,” suggests Pandey.
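A minimal sketch of that "models watching models" idea: the primary model's output passes through a separate guard check before it reaches the user. Here the guard is a trivial stand-in (a keyword list producing a placeholder risk score) for what would in practice be a second, purpose-trained classifier; every name, threshold, and rule below is hypothetical.

```python
# Sketch of a guard model screening another model's output before release.
# classify_risk() stands in for a second, security-focused model; the
# blocklist and threshold are placeholders, not a real policy.

BLOCKLIST = {"api_key", "password", "ssn"}

def primary_model(prompt: str) -> str:
    """Placeholder for the generative model serving the application."""
    return f"Draft answer for: {prompt}"

def classify_risk(text: str) -> float:
    """Stand-in for a guard model returning a risk score in [0, 1]."""
    hits = sum(term in text.lower() for term in BLOCKLIST)
    return min(1.0, hits / 2)

def guarded_generate(prompt: str, threshold: float = 0.5) -> str:
    draft = primary_model(prompt)
    if classify_risk(draft) >= threshold:
        # Flag for review instead of returning the raw output.
        return "Response withheld pending security review."
    return draft

print(guarded_generate("Summarize the quarterly sales report"))
```

The design choice worth noting is that the guard sits outside the primary model, so it can be updated, audited, and tuned independently as new attack patterns emerge.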
Ultimately, trust becomes a crucial business key performance indicator (KPI). Security directly affects users' trust, their experience, and the success of generative AI solutions. Ensuring the security of applications and models is essential to maintain user confidence and drive revenue.
However, striking the right balance between innovation and security is an ongoing process. With time, organizations will need to adapt and remain agile as both generative AI technology and its security measures evolve.
“Assume a zero-trust mindset, build defense in depth, and assume that you need to consider the risks of an opaque box, as well as access to the internal pipeline itself, all the way from the data to the app itself,” advises Pandey. “And remember where you start today and where you’ll be three years down the line is going to be very different because both the generative AI vector, as well as the security around it, are both evolving rapidly.”