The Importance of AI Security and Compliance in the Enterprise World

The AI marketplace is incredibly dynamic, especially in the year since OpenAI publicly launched ChatGPT. Survey after survey shows that enterprises are moving fast to evaluate and embrace new AI tools. But as they do, how are they ensuring the AI solutions they bring onboard for employees and customers work reliably and securely, and comply with the rules and regulations that apply in the jurisdictions where they operate? Enter Cranium.

A Custom Software Solution for AI Security and Compliance

The New Jersey-based startup, incubated within professional services giant KPMG before emerging from stealth in April 2023, offers a custom software solution that allows enterprises to assess AI security risks and compliance without disrupting existing workflows. “The level of experimentation has gone through the roof,” said founder and CEO Jonathan Dambrot in a videoconference interview with VentureBeat. “Every single technology product is now integrating AI — either has done it or has a plan to do it over the course of the next six months to 12 months. So this is where it becomes really important in understanding how people are using the AI.”

Investors agree: today, Cranium announced a $25 million Series A funding round. The round was led by Telstra Ventures with participation from KPMG LLP and SYN Ventures, taking Cranium’s total capital raised to date to $32 million.

Products and Services for AI Governance

Cranium offers several products and services organized around four goals: discovery, monitoring, transparency, and reporting and compliance. One solution is private AI dashboards that let customer organizations track how they are using AI, what data their AI models can access, and where that data goes inside and outside the organization. “When we look at the market, the interesting part is the AI governance role,” Dambrot told VentureBeat. “We think of ourselves as a platform to help support that process, and it starts with the question: ‘how do we give visibility to AI services?'”

Cranium’s Connectors, secure software that monitors and assesses in real time how AI is being used at its client organizations, support most major AI development environments, models, and frameworks, including Azure, Azure OpenAI, Amazon SageMaker, Google Vertex AI, Databricks, MLflow, Dataiku, and DataRobot.

Another offering, the “AI Card,” introduced in the summer of 2023, lets Cranium’s customers plug their AI applications into Cranium’s secure software assessment tools and generate a discrete file describing an application’s value, purpose, data, and governance. Companies can upload evidence supporting each of these areas, then share the AI Card with third parties as requested, including on their own websites, with government agencies, or even with customers and new clients.
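Cranium has not published the AI Card’s actual file format, but the description above — a discrete, shareable file covering value, purpose, data, and governance, with attached evidence — can be sketched conceptually. Every field name and value below is a hypothetical illustration, not Cranium’s real schema:

```python
import json

# Hypothetical sketch of what an AI Card-style artifact might contain.
# Field names and structure are illustrative assumptions only; Cranium
# has not publicly documented its actual format.
ai_card = {
    "system": "customer-support-chatbot",
    "value": "Deflects tier-1 support tickets",
    "purpose": "Answer product questions from a curated knowledge base",
    "data": {
        "training_sources": ["public docs", "anonymized ticket history"],
        "contains_pii": False,
    },
    "governance": {
        "owner": "ML Platform Team",
        "last_review": "2023-10-01",
        # Evidence uploads supporting each area, per the article
        "evidence": ["dpia.pdf", "model-eval-report.pdf"],
    },
}

# Serialize to a single discrete file that could be shared with
# third parties such as regulators or customers.
with open("ai_card.json", "w") as f:
    json.dump(ai_card, f, indent=2)
```

The appeal of such a format is that one self-contained document can travel with the AI application, so a regulator or prospective customer reviews the same artifact the internal governance team maintains.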

Cranium further generates an “AI Maturity Score,” which assesses the vulnerabilities of a customer’s AI stack using Cranium’s red-teaming exercises to expose and plug gaps across libraries, data repositories and lakehouses/warehouses, pipelines, and, of course, the models themselves. The score runs from 1 to 100, with a higher number indicating a more mature and secure AI stack. The Maturity Score helps organizations with “understanding what’s there [in terms of AI being used inside their companies,] and the risk of those AI systems because, in most cases, governance groups and the security teams really don’t have that visibility,” Dambrot said.
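Cranium has not disclosed how the Maturity Score is computed, but the general shape described — red-team findings across several stack layers rolled up into a single 1-to-100 number — can be illustrated. The layers follow the article; the per-finding penalty and the counts are invented for the example:

```python
# Illustrative sketch only: not Cranium's actual scoring method.
# Red-team findings are tallied per stack layer (the layers named in
# the article), then aggregated into a single 1-100 score where a
# higher number means a more mature, more secure AI stack.
FINDINGS_BY_LAYER = {
    "libraries": 2,     # e.g. vulnerable dependency versions
    "data_stores": 1,   # e.g. an unencrypted lakehouse bucket
    "pipelines": 0,
    "models": 3,        # e.g. prompt-injection or extraction gaps
}

PENALTY_PER_FINDING = 5  # hypothetical weight per open finding


def maturity_score(findings_by_layer, penalty=PENALTY_PER_FINDING):
    total_findings = sum(findings_by_layer.values())
    # Clamp so the score stays within the 1-100 range the article describes.
    return max(1, 100 - penalty * total_findings)


print(maturity_score(FINDINGS_BY_LAYER))  # 6 findings -> 100 - 30 = 70
```

The point of such a rollup is that plugging a gap in any layer visibly moves one headline number, which is easier for governance and security teams to track than raw finding lists.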

“It’s like ‘Bring Your Own Device’ with the iPhone all over again,” said Dambrot, noting that many employees are doing work with AI tools that aren’t necessarily cleared by management but nonetheless need to be tracked and monitored to ensure compliance and security. He cited the hypothetical example of an employee who starts taking photos of their company’s datacenter and uploading them to ChatGPT’s new computer vision mode to ask it for tips on re-architecting or writing policies. While a legitimate use case that could help the company, it also carries risks, which Cranium’s connectors and offerings can help company management and security teams understand and mitigate. “You don’t know where this data is going,” Dambrot noted. “You don’t know how the models are being trained.”

Cranium itself uses AI and machine learning (ML), specifically in code completion and software development. “We are investing heavily in driving better code development with the use of AI,” said Dambrot, which includes “use of AI in the product, use of AI to help build, including QA [quality assurance] testing and other areas. We bring all of our assets into that, including our human assets using our AI systems. We monitor those and then we look at our own AI Card requirements…we’re drinking our own champagne.”

Though a young company, Cranium already counts a number of customers across sectors as diverse as health sciences, financial services, consumer packaged goods, and retail. Marcus Bartram, General Partner at Telstra Ventures, expressed his enthusiasm for Cranium’s solutions in a statement provided in a press release: “Cranium stands at the forefront of AI security and trust software, empowering organizations to navigate the crowded cybersecurity industry with its groundbreaking product and pioneering innovations,” he said. Telstra Ventures has a history of backing standout disruptors, having made 96 investments that led to 38 liquidity events, including big names like CrowdStrike, DocuSign, and Box. The firm recently announced its third fund, which takes its funds under management to $1 billion.

The new capital aims to fuel various areas of the company, from R&D and business expansion to marketing efforts. By bolstering its enterprise software platform, Cranium plans to provide organizations with a more secure and compliant AI/ML environment.

The company is already well positioned to help its customers comply with the still-in-progress but rapidly looming EU AI Act, which Dambrot described as “almost like GDPR from a privacy perspective.” In addition, Dambrot said, “we’re working on some things that are going to be launching early next year on further being able to provide visibility, especially in a GenAI environment…I liken it to putting brakes on a race car. If you try to go 200 miles per hour in your race car and take a corner with no brakes, you’re in trouble. We’re like the brakes that are enabling everyone to go faster and experiment more.”

In a world where AI adoption is rising quickly, Cranium aims to ensure that organizations don’t have to choose between innovation and security. By developing robust solutions focused on trust, visibility, and compliance, the company is geared to set new industry standards for AI security.
