It has been a year since OpenAI quietly launched ChatGPT as a “research preview,” a chatbot based on a large language model (LLM). LLMs are a particular implementation of transformer neural networks, a technology that first surfaced in a 2017 paper from Google. ChatGPT provided a user-friendly interface to the underlying LLM GPT-3.5 and became the fastest-growing consumer technology ever, surpassing a million users within five days of its launch. Now, there are hundreds of millions of ChatGPT users. Not only that, there is a plethora of similar bots built on differing LLMs from multiple companies. The most recent is Amazon Q, a business-oriented chatbot.
These technologies may upend creative and knowledge work as we know it. For example, an MIT study last summer focused on tasks like writing cover letters, delicate emails, and cost-benefit analyses. The results showed that using ChatGPT “decreased the time it took workers to complete the tasks by 40%, and output quality, as measured by independent evaluators, rose by 18%.”
People compare the technology to electricity and fire because, like those fundamental discoveries, AI has the potential to radically change almost every aspect of our lives, from how we work and communicate to how we solve complex problems.
The Impact on the Economy and Regulation
Consulting giant McKinsey has estimated that generative AI will add more than $4 trillion a year to the global economy. Consequently, technology giants including Microsoft and Google have vigorously pursued this market. Debates about the impact of the technology and its safety have swirled since the appearance of ChatGPT. From the U.S. Congress to Bletchley Park — once the home of Britain's secret code-breaking operations during World War II — these debates have largely split into two camps: AI “accelerationists” and “doomers.”
The accelerationists advocate rapid advancement of AI technology, emphasizing its enormous potential benefits. The “doomers,” conversely, urge a cautious approach, emphasizing the potential risks of unbridled AI development. This debate has prompted the first substantive actions on AI regulation. While the EU AI Act, in development for several years, has yet to come to fruition, the U.S. has moved forward with a sweeping Executive Order on “Safe, Secure, and Trustworthy Artificial Intelligence.” The order aims for a balance between unfettered development and stringent oversight. Countries worldwide are vigorously pursuing AI strategies in response to the LLM revolution.
The Emergence of Q*
Russian President Vladimir Putin recently announced plans for a new Russian strategy for AI development to counter Western influence over the technology. He is late to the party as the U.S., China, the U.K., and others are already far down this path. Odd, too, that he should launch this now since he famously said in 2017 that the nation that leads in AI “will be the ruler of the world.”
All of which is to say that this last year in AI has been a whirlwind. We might have thought this whirl reached its apex recently when OpenAI’s board of directors fired Sam Altman. But the CEO was back in less than a week after an investor and employee revolt — and instead, the board is gone. Now there is a new mystery around OpenAI. The secretive project Q* (pronounced “Q-star”) has emerged as the next big news item. The researchers’ name “Q” evokes the “Quartermaster,” the top-secret brainiac who builds gadgets for James Bond in the films.
According to Reuters, the OpenAI board received a letter from researchers about advances in this heretofore unknown project only days before it fired Altman, ostensibly for a lack of candor in his communications. The letter warned the board that Q* could threaten humanity. There is speculation that the board did not know about Q*, and that this might have been the primary reason for firing Altman. This possibility seems unlikely, however, since Ilya Sutskever was both OpenAI’s chief scientist and a board member. Backing that up is a report from Platformer that states: “I can report that the board never received any such letter about Q*.”
Rumors are now swirling about what Q* could be: a new neuro-symbolic architecture (which would be a significant development), or a more modest but still impressive synthesis of LLMs plus several known techniques to produce something better than the current state of the art. An effective neuro-symbolic architecture does not yet exist at scale, but such a system could enable an AI to learn from less data while better explaining its behavior and logic. Several companies and academic institutions are working to develop this, including IBM, which believes this architecture is a “pathway to achieve artificial general intelligence” (AGI). AGI is still ill-defined, but is generally viewed as the ability to process information at a human level or even exceed human capabilities, all at machine speed.
As reported by The Atlantic, Q* most likely falls short of this neuro-symbolic breakthrough. Nevertheless, if the Q* advance comes to market, it will be another step towards AGI — which, by the way, Nvidia CEO Jensen Huang said might be achieved within five years. Microsoft President Brad Smith has a somewhat different view, as reported by Reuters: “There’s absolutely no probability that you’re going to see this so-called AGI, where computers are more powerful than people, in the next 12 months. It’s going to take years, if not many decades.”
As we have seen, breakthroughs like ChatGPT and the techniques that might power systems like Q* have unleashed waves of optimism, apprehension, regulation, competition, and speculation. The rapid AI advancements of the past year are not just technological milestones but also a mirror reflecting our relentless quest for knowledge and mastery over our own creations. The coming year is certainly shaping up to be just as exciting and nerve-rattling as the last one. Where we go from here depends on how successfully we can channel that energy and provide sound guidance.
– Gary Grossman, EVP of the technology practice at Edelman and global lead of the Edelman AI Center of Excellence.