The Rise and Fall of Galactica: Lessons Learned from AI Language Models

One year ago, Meta released a research demo called Galactica, an open-source “large language model for science” trained on 48 million scientific papers, promising to summarize academic literature, solve math problems, generate Wiki articles, write scientific code, annotate molecules and proteins, and more (Meta). Galactica’s reign was short-lived, however: the public demo was taken down after only three days amid concerns over hallucinations in its output (Meta).

Outcry and Controversy

Galactica quickly drew criticism for unscientific and sometimes offensive output, including text that sounded plausible but was factually incorrect (Meta). Yann LeCun, Meta’s chief AI scientist, defended the model on Twitter but failed to shift public perception (LeCun). Galactica’s abrupt withdrawal turned what might have been a game-changing model of the generative AI era into a missed opportunity (Meta).

The Arrival of ChatGPT

Despite the setback with Galactica, anticipation among AI researchers kept building as rumors of OpenAI’s GPT-4 began circulating (Meta). Just two weeks later, OpenAI released ChatGPT, another generative AI model, during the NeurIPS conference (Meta). ChatGPT, too, suffered from hallucinations, producing eloquent, confident responses that often lacked factual accuracy (Meta). OpenAI acknowledged this weakness in its announcement blog post and noted how difficult it would be to fix (OpenAI).

Nevertheless, ChatGPT quickly gained popularity and became one of the fastest-growing online services ever, reaching an estimated 100 million monthly users within two months of launch and 100 million weekly users today (Meta).

The Legacy of Galactica

Joelle Pineau, VP of AI research at Meta, reflected on the lessons of Galactica’s short-lived existence (Pineau). She emphasized that Galactica was never intended to be a product, only a research project (Pineau). Still, the gap between public expectations and what the research actually delivered was significant, leading to disappointment and misconceptions about the model’s capabilities (Pineau).

Meta pulled the Galactica demo to prevent misuse and encourage responsible use (Pineau). Pineau admitted that Meta had misjudged expectations around Galactica but said the lessons learned were incorporated into its next generation of models, beginning with Llama (Pineau).

The Success of Llama

Llama, Meta’s large language model, made waves in the AI research world in February 2023, followed by the commercial release of Llama 2 in July and Code Llama in August (Meta). Llama marked a pivotal moment for open-source AI, sparking a heated debate in the AI community (Meta).

Meta’s commitment to open research was evident with Llama, although access to the model required filling out a request form, likely a precaution after the backlash over Galactica (LeCun). Even so, Llama’s model weights leaked online within a week of release, causing further complications (Editor’s Note).
