There is no doubt that the pace of AI development has accelerated over the last year. Due to rapid advances in technology, the idea that AI could one day be smarter than people has moved from science fiction to plausible near-term reality.
Geoffrey Hinton, a Turing Award winner, said in May that AI smarter than people might arrive not in 50 to 60 years, as he had initially thought, but possibly by 2028. Similarly, DeepMind co-founder Shane Legg said recently that he sees a 50-50 chance of achieving artificial general intelligence (AGI) by 2028. (AGI refers to the point at which AI systems possess general cognitive abilities and can perform intellectual tasks at or beyond the level of humans, rather than being narrowly focused on specific functions, as has been the case so far.)
This near-term possibility has prompted robust, and at times heated, debate about AI, specifically its ethical implications and regulatory future. These debates have moved from academic circles to the forefront of global policy, prompting governments, industry leaders and concerned citizens to grapple with questions that may shape the future of humanity.