Stanford Study Reveals Lack of Transparency in AI Language Models

Today, Stanford University’s Center for Research on Foundation Models (CRFM) released the findings of its comprehensive evaluation of AI large language models (LLMs), also known as foundation models. In response to the increasing societal impact of AI, the CRFM introduced the Foundation Model Transparency Index to highlight the crucial need for transparency in these models. The results of the study were concerning, exposing a fundamental lack of transparency among major model developers in the AI industry.

The Transparency Index Findings

According to the researchers at CRFM, no major foundation model developer came close to providing adequate transparency, with the highest overall score being only 54%. This revelation underscores the urgent necessity for transparency, as it is crucial for public accountability, scientific innovation, and effective governance. Interestingly, open models, such as Meta’s Llama 2 and Hugging Face’s BloomZ, scored the highest in the index. However, it is worth noting that OpenAI’s proprietary model, GPT-4, secured third place ahead of Stability’s Stable Diffusion.

“It’s not that the open source models are gaining 100% and everyone else is getting zero, there is quite a bit of nuance here. That’s because we consider the whole ecosystem – the upstream dependencies, what data, what labor, what compute went into building the model, but also the downstream impact on these models.” – CRFM Society Lead Rishi Bommasani

The evaluation, led by Rishi Bommasani, CRFM Society Lead, and his team, including CRFM Director Percy Liang, assessed 10 major foundation model developers: OpenAI, Anthropic, Google, Meta, Amazon, Inflection, AI21 Labs, Cohere, Hugging Face, and Stability. Each developer’s flagship model was examined for its transparency regarding the model’s construction, usage, and underlying data. The evaluation comprised 15 specific categories, encompassing aspects such as data, labor, compute, and downstream impact. Notably, the team had previously examined model compliance with the EU AI Act, which informed their approach to transparency in this study.

Importance of Transparency

According to CRFM Director Percy Liang, the Foundation Model Transparency Index reflects a broader understanding of transparency that goes beyond the simple classification of models as proprietary or open-source. Liang notes that transparency spans the entire ecosystem surrounding a model, covering upstream factors such as data, labor, and compute, as well as the downstream impact of these models.

“The basic point is that transparency matters. The companies are not homogenous about what they’re doing. It’s not like all of them are good at data and bad at disclosing some compute.” – CRFM Director Percy Liang

Furthermore, the study highlighted the variations in transparency practices among different companies. For instance, Hugging Face’s model Bloom excels at risk evaluation, yet this analysis of risk and mitigation was not carried over to its successor, BloomZ. This demonstrates the need for companies to apply transparency practices consistently across all of their AI models.

Addressing the low transparency scores of some models, Bommasani explains that they do not indicate inherent flaws in the models themselves. Rather, they reflect the lack of established norms and standards in certain transparency categories, particularly for companies that entered the AI space relatively recently. He remains hopeful that the study will motivate companies to embrace transparency as a baseline expectation in the industry.

  • According to Bommasani, “There is really no reason those scores couldn’t be higher. I think it’s just a matter of Amazon coming into this later than, say, OpenAI,” highlighting the potential for improvement.
  • Bommasani further emphasizes that transparency is not a monolithic concept, noting that developers vary in which categories they disclose well.

Liang concludes by stating that the Foundation Model Transparency Index serves as a framework for considering transparency and that the results represent a snapshot of the current landscape. He anticipates that companies will face increasing pressure to enhance transparency and expects to witness significant progress in the coming months. Encouragingly, he believes that many changes needed to improve transparency will be relatively simple to implement.

“Others are harder, but I think there’s just low or medium-hanging fruit that companies really ought to be doing. I’m optimistic that we’re going to see some change in the coming months.” – CRFM Director Percy Liang
