Partnership between Dell and Hugging Face Simplifies Deployment of Generative AI

Almost every enterprise today is exploring the possibilities of large language models (LLMs) and generative AI for their business. However, deploying this complex technology and addressing concerns such as data security, privacy, and fine-tuning can be daunting. To help overcome these hurdles, Dell and Hugging Face have partnered to simplify the deployment of customized LLMs on-premises and maximize the potential of this evolving technology.

“The impact of gen AI and AI in general will be significant, in fact, it will be transformative,” said Matt Baker, SVP for Dell AI strategy. “This is the topic du jour, you can’t go anywhere without talking about generative AI or AI,” he added. “But it is advanced technology and it can be pretty daunting and complex.”

Simplifying Deployment with the Dell and Hugging Face Partnership

Through their partnership, Dell and Hugging Face will create a new Dell portal on the Hugging Face platform. The portal will provide custom containers, scripts, and technical documentation for deploying Hugging Face's open-source models on Dell servers and data storage systems. The service will initially support Dell PowerEdge servers, accessible through the APEX console, and will later expand to Precision and other Dell workstations. Dell will also publish updated containers with models optimized for its infrastructure as new gen AI use cases and models emerge.

“The only way you can take control of your AI destiny is by building your own AI, not being a user, but being a builder,” said Jeff Boudier, head of product at Hugging Face. “You can only do that with open source.”

This partnership is part of Dell’s effort to establish itself as a leader in generative AI. The company recently added the ObjectScale XF960, an all-flash appliance designed for AI and analytics workflows, to its ObjectScale line, and has expanded its gen AI portfolio to encompass model customization, tuning, and deployment.

“I’m trying to avoid the puns of Dell and Hugging Face ‘embracing’ on behalf of practitioners, but that’s in fact what we are doing,” joked Baker.

Overcoming Challenges in Enterprise Adoption of Gen AI

Enterprise adoption of gen AI comes with various challenges. Complexity, closed ecosystems, time-to-value, vendor reliability and support, ROI, and cost management are some of the reported issues. Moving gen AI projects from proof of concept to production can also be a challenge, similar to the early days of big data. Organizations are also concerned about the exposure of their data while leveraging it for insights and automation.

“Today a lot of companies are stuck because they’re being asked to deliver on this new generative AI trend, while at the same time they cannot compromise their IP,” Boudier pointed out.

According to Dell research, enterprises prefer on-prem or hybrid implementations for gen AI, especially when it comes to protecting their most precious intellectual property.

“There’s a significant advantage to deploying on-prem, particularly when you’re dealing with your most precious IP assets, your most precious artifacts,” said Baker.

The Dell Hugging Face Portal: Simplifying Deployment and Customization

The Dell Hugging Face portal will offer curated sets of models selected based on performance, accuracy, use cases, and licenses. Organizations can choose their preferred model and Dell configuration for deployment within their infrastructure.

“Imagine a Llama 2 model specifically configured and fine-tuned for your platform, ready to go,” Baker explained.

Use cases for the Dell Hugging Face portal include marketing and sales content generation, chatbots and virtual assistants, and software development. The aim of this portal is to simplify the process of building gen AI applications by reducing complexity and providing preconfigured capabilities.

“We’re going to take the guesswork out of being a builder,” said Baker. “It’s the easy button to go to Hugging Face and deploy the capabilities you want and need in a way that takes away a lot of the minutiae and complexity.”

What sets this offering apart from others is Dell’s ability to fine-tune models from top to bottom. This enables enterprises to quickly deploy the best configuration for a given model or framework without exchanging any data with public models. The fine-tuning process can be time-consuming, but Dell aims to simplify it further by providing a containerized tool based on techniques like LoRA and QLoRA.
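The appeal of LoRA and QLoRA is that they adapt a model by training only a small number of extra parameters rather than updating every weight. A minimal sketch of the core LoRA idea, with hypothetical layer sizes and no connection to Dell's actual containerized tool:

```python
import numpy as np

# Illustrative LoRA sketch (hypothetical sizes, not Dell's tooling):
# instead of updating a full weight matrix W (d_out x d_in), train two
# small factors B (d_out x r) and A (r x d_in) with rank r << d_in, d_out.
rng = np.random.default_rng(0)
d_out, d_in, r = 64, 64, 4

W = rng.standard_normal((d_out, d_in))      # frozen pretrained weights
A = rng.standard_normal((r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                    # zero init: adapter starts as a no-op

x = rng.standard_normal(d_in)

# Adapted forward pass: y = W x + B (A x); only A and B receive gradients.
y = W @ x + B @ (A @ x)

# Trainable parameter count drops from d_out * d_in to r * (d_in + d_out).
full_params = d_out * d_in          # 4096
lora_params = r * (d_in + d_out)    # 512
```

QLoRA applies the same low-rank trick on top of a quantized base model, shrinking memory requirements further; in both cases the pretrained weights stay frozen, which is why fine-tuning can run on modest on-prem hardware.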

“Techniques like RAG are a way of not having to build a model, but instead providing context to the model to achieve the right generative answer,” explained Baker.
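The pattern Baker describes can be sketched in a few lines: retrieve the most relevant passage from your own documents and prepend it to the prompt, rather than baking the knowledge into the model. The documents, query, and bag-of-words scoring below are purely illustrative; a production system would use a vector database and learned embeddings.

```python
import math
import re
from collections import Counter

# Toy retrieval-augmented generation (RAG) sketch: pick the document most
# similar to the query and supply it as context to the generative model.
docs = [
    "PowerEdge servers support GPU acceleration for AI workloads.",
    "ObjectScale XF960 is an all-flash appliance for analytics pipelines.",
    "Llama 2 is an open-source large language model.",
]

def bow(text):
    """Bag-of-words term counts used here as a crude relevance signal."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a, b):
    num = sum(a[t] * b[t] for t in a.keys() & b.keys())
    den = math.sqrt(sum(v * v for v in a.values())) * \
          math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

query = "Which server runs AI workloads?"
q = bow(query)
context = max(docs, key=lambda d: cosine(q, bow(d)))

# The retrieved passage grounds the model's answer without any fine-tuning.
prompt = f"Context: {context}\n\nQuestion: {query}\nAnswer:"
```

Because the model only ever sees the retrieved snippet at inference time, sensitive documents never leave the organization's infrastructure, which is the on-prem advantage Dell is emphasizing.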

Dell believes that going forward, all enterprises will have their own vertical, using their specific data combined with a model to provide a generative outcome.
