Challenges in the Age of Generative AI
The emergence of Large Language Models (LLMs) and generative AI poses new security challenges. Traditional security measures are inadequate for technologies that redefine data access.
Adapting Security Programs
To embrace generative AI safely, security teams must update their programs to address novel risks. An entire industry is forming around LLMs, and employees increasingly turn to intermediary tools such as browser extensions, creating a new form of shadow IT.
“Using a Chrome extension to write a better sales email doesn’t feel like a vendor, but it’s introducing new data security risks.”
Understanding Data Access
Managing data access boundaries is essential. Organizations must ensure that anyone querying an LLM also has permission to access the underlying data sources that feed it, whether through training, fine-tuning, or retrieval. Privacy concerns are magnified when personal information is involved, with direct consequences for compliance and data-handling obligations.
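What enforcing that boundary looks like depends heavily on the architecture. Below is a minimal sketch for a retrieval-style setup, where the permission check happens before any document reaches the model; the data shapes and helper names (Document, user_can_access, and so on) are hypothetical, not a specific product's API.

```python
from dataclasses import dataclass

@dataclass
class Document:
    doc_id: str
    source: str       # e.g. "wiki", "crm", "hr"
    acl: set[str]     # user or group IDs allowed to read this document
    text: str

def user_can_access(user_id: str, user_groups: set[str], doc: Document) -> bool:
    """Mirror the source system's permissions: a document is usable only if
    the requesting user, or one of their groups, appears in its ACL."""
    return user_id in doc.acl or bool(user_groups & doc.acl)

def build_prompt_context(user_id: str, user_groups: set[str],
                         candidates: list[Document], question: str) -> str:
    """Filter retrieved documents by the caller's permissions *before*
    anything reaches the model, so the LLM never sees data the user
    could not have opened directly."""
    allowed = [d for d in candidates if user_can_access(user_id, user_groups, d)]
    context = "\n\n".join(d.text for d in allowed)
    return f"Context:\n{context}\n\nQuestion: {question}"
```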
Vendor and Enterprise Security
Vendor, enterprise, and product security programs must all adapt to these evolving risks. Hold vendors to your security and privacy standards, and make sure due diligence includes questions specific to generative AI, such as whether your data will be used to train their models.
Your organization’s approach to balancing friction and usability may also need to change in order to control the browser extensions and OAuth applications that interact with your SaaS tools.
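One lightweight way to get a handle on this is to periodically compare the OAuth grants your identity provider reports against a list of vetted applications. The sketch below assumes a simplified grant format and placeholder client IDs; the real shape of the data depends on your identity provider's API.

```python
# Hypothetical audit: compare the OAuth apps users have granted access to
# SaaS data against an approved list, and flag the rest for review.
APPROVED_CLIENT_IDS = {
    "1234.apps.example.com",   # placeholder IDs for vetted vendors
    "5678.apps.example.com",
}

def flag_unapproved_grants(grants: list[dict]) -> list[dict]:
    """Each grant is assumed to look like
    {"user": ..., "client_id": ..., "scopes": [...]}."""
    return [g for g in grants if g["client_id"] not in APPROVED_CLIENT_IDS]

if __name__ == "__main__":
    sample = [
        {"user": "alice@example.com", "client_id": "1234.apps.example.com",
         "scopes": ["drive.readonly"]},
        {"user": "bob@example.com", "client_id": "unknown-ai-extension",
         "scopes": ["gmail.readonly"]},
    ]
    for grant in flag_unapproved_grants(sample):
        print(f"Review: {grant['user']} granted {grant['scopes']} to {grant['client_id']}")
```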
Setting Expectations and Collaboration
Untrusted intermediary applications, often in the form of browser extensions, risk sending company data to unapproved third parties. Set clear expectations with employees about which tools are sanctioned, and collaborate with legal teams to establish AI usage policies.
Product Security
Product security entails being transparent about how customer data is used and respecting existing security boundaries. Ensure that customers can only reach models built from data they are already permitted to access, and offer opt-in and opt-out controls for gen AI features.
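One simple way to encode both constraints is to gate model access on the customer's opt-in status and on the datasets they are entitled to. The sketch below is illustrative only; the Tenant and Model types and their fields are assumptions, not an existing API.

```python
from dataclasses import dataclass, field

@dataclass
class Tenant:
    tenant_id: str
    genai_opt_in: bool = False                 # gen AI features stay off until the customer opts in
    allowed_datasets: set[str] = field(default_factory=set)

@dataclass
class Model:
    model_id: str
    trained_on: set[str]                       # dataset IDs used to train or fine-tune this model

def tenant_may_use_model(tenant: Tenant, model: Model) -> bool:
    """A customer may only reach models built from data they are already
    entitled to, and only if they have opted in to gen AI features."""
    return tenant.genai_opt_in and model.trained_on <= tenant.allowed_datasets
```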
Embracing these changes is worth the effort: generative AI tools can genuinely contribute to a company’s success, provided the security concerns they raise are addressed effectively.
Rob Picard is the Head of Security at Vanta.