The Allen Institute for AI (AI2) Introduces OLMo: a Groundbreaking Open Source LLM and Framework

The Allen Institute for AI (AI2), a non-profit research institute founded in 2014 by the late Microsoft co-founder Paul Allen, today introduced OLMo, an open source model it calls the “first truly open LLM and framework,” offering an “alternative to current models that are restrictive and closed” and driving a “critical shift” in AI development.

Advancing Open Source AI

The news comes at a moment when open source/open science AI, which has been playing catch-up to closed, proprietary LLMs like OpenAI’s GPT-4 and Anthropic’s Claude, is making significant headway. For example, yesterday the CEO of Paris-based open source AI startup Mistral confirmed the ‘leak’ of a new open source AI model nearing GPT-4 performance. And on Monday, Meta released a new and improved version of its code generation model, Code Llama 70B, as many eagerly await the third iteration of its Llama LLM.

However, open source AI continues to come under fire by some researchers, regulators and policy makers — a recent, widely-shared opinion piece in IEEE Spectrum, for instance, is titled “Open-Source AI is Uniquely Dangerous.”

The OLMo Framework: Committed to Transparency and Accessibility

The OLMo framework’s “completely open” AI development tools, available to the public, include full pretraining data, training code, model weights, and evaluation tools. It provides inference code, training metrics, and training logs, as well as the evaluation suite used in development: 500+ checkpoints per model, “from every 1000 steps during the training process,” and evaluation code under the umbrella of the Catwalk project.

The researchers at AI2 said they will continue to iterate on OLMo with different model sizes, modalities, datasets, and capabilities.

“Many language models today are published with limited transparency,” said Hanna Hajishirzi, OLMo project lead, a senior director of NLP Research at AI2, and a UW professor, in a press release. “Without having access to training data, researchers cannot scientifically understand how a model is working. It’s the equivalent of drug discovery without clinical trials or studying the solar system without a telescope.”

Nathan Lambert, an ML scientist at AI2, posted on LinkedIn saying that “OLMo will represent a new type of LLM enabling new approaches to ML research and deployment because, on a key axis of openness, OLMo represents something entirely different. OLMo is built for scientists to be able to develop research directions at every point in the development process and execute on them, which was previously not available due to incomplete information and tools.”

Jonathan Frankle, chief scientist at MosaicML and Databricks, called AI2’s OLMo release “a giant leap for open science,” while the Hugging Face CTO posted on X that the model/framework is “pushing the envelope of open source AI.”

“Open foundation models have been critical in driving a burst of innovation and development around generative AI,” said Yann LeCun, Meta’s chief AI scientist. “The vibrant community that comes from open source is the fastest and most effective way to build the future of AI.”
