Amid the broader push to use AI for business gains, Iterate, a California-based company known for deploying AI and emerging technologies, has introduced AppCoder LLM. The fine-tuned model generates working, up-to-date code for production-ready AI applications from natural-language prompts, eliminating the need to write code by hand. Integrated into Iterate’s Interplay application development platform, AppCoder LLM outperforms existing AI-driven coding solutions and gives developer teams accurate code for their AI solutions.
Revolutionizing Application Development
At its core, Iterate Interplay is a fully containerized drag-and-drop platform that connects AI engines, enterprise data sources, and third-party service nodes, and developer teams can tailor each node with custom code. AppCoder LLM simplifies code generation by letting users give instructions in natural language, making it accessible even to those without coding expertise. The model can generate code for computer vision libraries such as YOLOv8, as well as for LangChain and Google libraries, so developers can build advanced object detection applications and chatbots with little effort.
“Interplay-AppCoder can handle computer vision libraries such as YOLOv8 for building advanced object detection applications. We also have the ability to generate code for LangChain and Google libraries, which are among the most commonly used libraries (for chatbots and other capabilities),” said Brian Sathianathan, CTO of Iterate.ai.
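To make the YOLOv8 claim concrete, the snippet below shows the kind of object detection code a developer would otherwise write by hand. It is a minimal sketch built on the public ultralytics API, not actual output from Interplay-AppCoder; the model checkpoint and image path are placeholders chosen for illustration.

```python
# Minimal YOLOv8 object detection sketch using the public ultralytics API.
# Illustrative only; not generated by Interplay-AppCoder.
from ultralytics import YOLO

# Load a pretrained YOLOv8 nano checkpoint (downloaded on first use).
model = YOLO("yolov8n.pt")

# Run inference on a local image; the filename is a placeholder.
results = model("store_camera_frame.jpg")

# Print each detected object with its class name, confidence, and bounding box.
for result in results:
    for box in result.boxes:
        class_name = model.names[int(box.cls[0])]
        confidence = float(box.conf[0])
        x1, y1, x2, y2 = box.xyxy[0].tolist()
        print(f"{class_name}: {confidence:.2f} at [{x1:.0f}, {y1:.0f}, {x2:.0f}, {y2:.0f}]")
```

An application like the one described in the announcement would wrap logic of this sort in an Interplay node and feed it frames from a camera or video source.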
The speed and efficiency of AppCoder LLM are impressive: in one test, it built a core, production-ready detection app in just under five minutes, a remarkable acceleration of app development. That speed not only saves costs but also raises team productivity, freeing developers to focus on strategic initiatives essential to business growth.
Superior Performance and Results
AppCoder LLM’s performance outshines its competitors. In an ICE Benchmark comparison of the 15B versions of the AppCoder and WizardCoder models, the Iterate model scored significantly higher on functional correctness (2.4/4.0 versus 0.6/4.0) and usefulness (2.9/4.0 versus 1.8/4.0). The functional correctness score reflects how well the generated code passes unit tests and addresses the given question and reference code; the usefulness score indicates that the model’s output is clear, logical, readable to humans, and covers all functionality in the problem statement.
“Response time when generating the code on an A100 GPU was typically 6-8 seconds for Interplay-AppCoder. The training was done in a conversational question>answer>question>context method,” explained Sathianathan.
AppCoder LLM achieved these results through meticulous fine-tuning of CodeLlama-7B and 34B and WizardCoder-15B and 34B on a hand-coded dataset covering LangChain, YOLOv8, Vertex AI, and other modern generative AI libraries.
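Iterate has not published its training pipeline, but supervised fine-tuning of a code LLM on a hand-coded instruction dataset commonly looks something like the sketch below, which uses Hugging Face transformers with LoRA adapters from peft. Every model name, file path, and hyperparameter here is an assumption for illustration, not Iterate’s actual configuration.

```python
# Hypothetical sketch of supervised fine-tuning a code LLM with LoRA adapters.
# All names and hyperparameters are illustrative, not Iterate's actual setup.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)

base = "codellama/CodeLlama-7b-hf"  # public base checkpoint (assumption)
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base)

# Attach low-rank adapters so only a small fraction of weights is trained.
model = get_peft_model(model, LoraConfig(r=16, lora_alpha=32,
                                         target_modules=["q_proj", "v_proj"],
                                         task_type="CAUSAL_LM"))

# A hand-coded prompt/completion dataset in JSONL form (hypothetical path).
data = load_dataset("json", data_files="appcoder_examples.jsonl")["train"]
data = data.map(lambda ex: tokenizer(ex["prompt"] + ex["completion"],
                                     truncation=True, max_length=2048),
                remove_columns=data.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="appcoder-lora", num_train_epochs=3,
                           per_device_train_batch_size=2, learning_rate=2e-4),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

The conversational question>answer>question>context format Sathianathan describes would shape how the prompt and completion fields are assembled before tokenization in a pipeline like this.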
While AppCoder is now available for testing and use, Iterate has larger goals. The company is currently building 15 private LLMs for large enterprises and is focused on driving scalability by bringing the models to CPU and edge deployments. Iterate aims to simplify the development of AI/ML apps and plans to keep expanding its platform and toolset to accommodate emerging models and growing data sets.
“Iterate will continue to provide a platform and expanding toolset for managing AI engines, emerging language models, and large data sets, all tuned for rapid development and deployment (of apps) on CPU and edge architectures. The space is rapidly expanding—and also democratizing—and we will continue to push innovative new management and configuration tools into the platform,” stated the CTO.
With its revenue nearly doubling over the past two years and a diverse range of customers in sectors such as banking, insurance, entertainment, and retail, Iterate is establishing itself as a leading AI company.