Databricks Announces New Retrieval Augmented Generation (RAG) Tooling to Improve AI Applications

Databricks, the data and AI company, has unveiled new retrieval augmented generation (RAG) tooling for its Data Intelligence Platform. The tooling aims to help customers build, deploy, and maintain high-quality large language model (LLM) apps for diverse business use cases. The tools, now available in public preview, address the key challenges of developing production-grade RAG apps: serving real-time business data, combining that data with the right models, and monitoring applications for issues like toxicity and accuracy.

“While there is an urgency to develop and deploy retrieval augmented generation apps, organizations struggle to deliver solutions that consistently deliver accurate, high-quality responses and have the appropriate guardrails in place to prevent undesirable and off-brand responses,” said Craig Wiley, Senior Director of Product for AI/ML at Databricks.

Enhancing Accuracy and Reliability with Retrieval Augmented Generation (RAG)

Large language models are gaining popularity due to their ability to respond quickly to general prompts. However, these models often lack the up-to-date, domain-specific knowledge that internal business use cases require. To address this, enterprises turn to retrieval augmented generation, or RAG: before the model answers, relevant material is retrieved from specific data sources and supplied to the model as context, improving accuracy and response quality. The RAG process is complex and fragmented, however, which often leads to underperforming RAG apps.
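Conceptually, the retrieve-then-generate loop is simple. The sketch below shows a minimal version in Python; `answer_with_rag`, `search_index`, and `llm` are hypothetical stand-ins for a vector store and a model client, not part of any specific Databricks API.

```python
# Minimal RAG loop: retrieve context, then ground the LLM's answer in it.
# `search_index` and `llm` are hypothetical stand-ins, not a real API.

def answer_with_rag(question: str, search_index, llm, k: int = 3) -> str:
    # 1. Retrieve the k documents most similar to the question.
    docs = search_index.similarity_search(query=question, num_results=k)

    # 2. Assemble a prompt that grounds the model in the retrieved text.
    context = "\n\n".join(doc.text for doc in docs)
    prompt = (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )

    # 3. Generate a response conditioned on the fresh business data.
    return llm.generate(prompt)
```

The complexity in production lies around this loop: keeping the index in sync with source data, choosing and evaluating models, and monitoring responses, which is where the fragmentation Databricks describes arises.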

Databricks’ Solution: RAG Tools in the Data Intelligence Platform

Databricks’ new RAG tools in its Data Intelligence Platform aim to solve the challenges teams face in developing and deploying high-quality RAG apps. With features like vector search, feature serving, and a unified AI playground, teams can handle every aspect of RAG development, from prototyping apps to shipping them into production.

  • The new vector search and feature serving capabilities eliminate the need for complex data loading pipelines, ensuring RAG apps have access to the most recent and relevant business information (see the query sketch after this list).
  • The unified AI playground and MLflow evaluation let developers access and evaluate models from different providers, so they can deploy the best-performing and most affordable model.
  • The Foundation Model APIs offer fully managed LLMs served from within Databricks’ infrastructure, providing cost and flexibility benefits with enhanced data security.
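For example, querying a Vector Search index from Python might look like the following. This is a sketch assuming the `databricks-vectorsearch` client package; the endpoint, index, and column names are placeholders, not names from the announcement.

```python
from databricks.vector_search.client import VectorSearchClient

# Connect to a Vector Search index that Databricks keeps in sync with the
# underlying business data (endpoint and index names are placeholders).
client = VectorSearchClient()
index = client.get_index(
    endpoint_name="rag_demo_endpoint",
    index_name="main.support.docs_index",
)

# Retrieve the three chunks most relevant to a user question.
results = index.similarity_search(
    query_text="How do I reset the device to factory settings?",
    columns=["doc_id", "text"],
    num_results=3,
)
print(results)
```

Because the index is managed by the platform, there is no separate pipeline to load, chunk, and refresh documents before they become searchable.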

Databricks’ fully managed Lakehouse Monitoring capability tracks RAG app performance in production at scale. It automatically scans app responses for toxicity and other unsafe content, letting teams take immediate action. Lakehouse Monitoring is integrated with model and dataset lineage, facilitating error detection and root cause analysis.
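To illustrate the kind of scan Lakehouse Monitoring automates, a hand-rolled version might periodically score logged responses with a toxicity classifier. The sketch below assumes a hypothetical Spark log table and uses a toy stub scorer; it is not the Lakehouse Monitoring API, which handles this automatically.

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import DoubleType

spark = SparkSession.builder.getOrCreate()

# Hypothetical scorer: in practice this would wrap a real toxicity
# classifier; a toy blocklist keeps the sketch self-contained.
@F.udf(returnType=DoubleType())
def toxicity_score(text: str) -> float:
    blocklist = {"hate", "stupid"}
    return 1.0 if set(text.lower().split()) & blocklist else 0.0

flagged = (
    spark.table("rag_logs.responses")          # hypothetical log table
    .withColumn("toxicity", toxicity_score(F.col("response_text")))
    .filter(F.col("toxicity") > 0.8)           # arbitrary example threshold
)

# Surface flagged responses so the team can take immediate action.
flagged.select("request_id", "response_text", "toxicity").show()
```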

“Managing a dynamic call center environment for a company our size, the challenge of bringing new agents up to speed amidst the typical agent churn is significant. Databricks provides the key to our solution… By ingesting content from product manuals, YouTube videos, and support cases into our Vector Search, Databricks ensures our agents have the knowledge they need at their fingertips. This innovative approach is a game-changer for Lippert, enhancing efficiency and elevating the customer support experience,” stated Chris Nishnick, Data and AI leader at Lippert.

As the demand for LLM apps catering to specific topics rises, Databricks plans to invest heavily in its suite of RAG tooling to ensure customers can deploy high-quality apps based on their data to production, at scale. The company is committed to ongoing research and innovation in this space.

Note: The above information has been sourced from VentureBeat.
