The Rapid Progress of AI: Experts Predict Potential and Risks

Artificial Intelligence (AI) is advancing at an astounding rate, transforming not only enterprise but everyday life. The pace of development has surprised even those working in the field and has heightened concerns about where the technology is headed.

New Survey Reveals Concerns and Predictions

In the “2023 Expert Survey on Progress in AI,” researchers from AI Impacts, the University of Bonn, and the University of Oxford sought the opinions of 2,778 authors whose work appeared in top industry publications and forums. The survey revealed some startling predictions:

  • Unaided machines could outperform humans in every task with a 10% probability by 2027 and a 50% probability by 2047.
  • Fully automating all human occupations has a 10% chance of being achieved by 2037.
  • There is a 10% chance that advanced AI could cause “severe disempowerment” and potentially lead to the extinction of the human race.

“While the optimistic scenarios reflect AI’s potential to revolutionize various aspects of work and life, the pessimistic predictions – particularly those involving extinction-level risks – serve as a stark reminder of the high stakes involved in AI development and deployment,” researchers reflected.

Shifting Perspectives and Accelerating Progress

This survey marks the third in a series, with previous studies conducted in 2016 and 2022. The opinions and projections expressed in the latest survey have changed significantly, possibly due to several major advancements in the field:

  • The launch of various AI models like ChatGPT, Anthropic’s Claude 2, Google’s Bard and Gemini.
  • The dissemination of two AI safety letters.
  • Government actions in the U.S., UK, and EU.

As a result, expectations for when specific AI capabilities will arrive have shifted earlier: predictions for 21 of the 32 tasks surveyed moved forward within just one year.

Anticipating Milestones and Forecasting Risks

Respondents were also queried on various milestones and potential risks:

  • High-Level Machine Intelligence (HLMI): Achieving human-level performance in tasks with unaided machines has a 50% chance by 2047 (13 years earlier than previously predicted in 2022).
  • Full Automation of Labor (FAOL): The complete automation of occupations by unaided machines has a 50% chance by 2116 (48 years earlier than the previous survey's projection).

While concerns about the risks of AI systems exist, such as alignment, trustworthiness, and predictability, respondents were especially worried about the following:

  • Dissemination of false information through deepfakes.
  • Manipulation of public opinion by AI.
  • Malicious groups using AI to create powerful tools, such as engineered viruses.
  • Authoritarian rulers leveraging AI for population control.
  • A potential increase in economic inequality due to AI systems.

Given these concerns, there is a strong consensus among experts that AI safety research should be prioritized along with the continuous evolution of AI tools.

An Uncertain Future with Both Promise and Risks

Participants in the survey were divided in their outlook on the impact of AI:

  • A majority (68%) believed positive outcomes and advancements are more likely than negative ones.
  • About 58% considered extremely adverse outcomes to be a “nontrivial” possibility.
  • Roughly half assigned a greater than 10% chance to human extinction or severe disempowerment.
  • “On the very pessimistic end, one in 10 participants put at least a 25% chance on outcomes in the range of human extinction,” the researchers reported.

The researchers caution that although experts understand AI technology and its past progress, forecasting its future remains difficult, and experts have shown no clear advantage over non-experts at such predictions.

While acknowledging these limitations, the researchers argue that AI researchers' educated guesses can still improve the accuracy of collective forecasts about the technology's trajectory.
