

Before 2028, will powerful open-source AI be regulated more tightly than closed-source AI, through newly-enacted US law?

Platform: Metaculus
Stars: ★★★☆☆
Probability of Yes: 25% (Unlikely)

Question description #

In July 2024, Meta released Llama 3.1 405B, the biggest and best open-source model to date. Llama 3.1 outperforms the leading closed-source models, OpenAI’s GPT-4o and Anthropic’s Claude 3.5 Sonnet, on multiple benchmarks, and expert consensus seems to be that it is overall only slightly behind. Mark Zuckerberg, who has for some time been something of an unpopular figure in tech, appears to be enjoying a wave of popularity amongst the Silicon Valley elite for his open-source manifesto, published at the same time as Llama 3.1’s release (Perrigo, 2024).

However, the prevailing view amongst AI safety experts is that open-source is dangerous. OpenAI, in fact, pivoted away from the open-source approach that inspired its name upon team members deciding that open-source is likely dangerous.

How could open-source be dangerous? The basic idea is that powerful AI in the hands of the masses makes it easy for anyone to use the AI for nefarious ends, such as cyberhacking or synthesizing novel viruses. (Expert consensus, at present, is that the offense-defense balance lies well towards offense—i.e., powerful AI cannot effectively defend against equally powerful AI.)

Currently, the Silicon Valley view is beating out the AI safety view in regulation:

The White House is coming out in favor of “open-source” artificial intelligence technology, arguing in a report Tuesday that there’s no need right now for restrictions on companies making key components of their powerful AI systems widely available.

“We recognize the importance of open systems,” said Alan Davidson, an assistant secretary of the U.S. Commerce Department, in an interview with The Associated Press.

As part of a sweeping executive order on AI last year, President Joe Biden gave the U.S. Commerce Department until July to talk to experts and come back with recommendations on how to manage the potential benefits and risks of so-called open models.

The term “open-source” comes from a decades-old practice of building software in which the code is free or widely accessible for anyone to examine, modify and build upon.

However, open-source AI development involves more than just code and computer scientists differ on how to define it depending on which components of the technology are publicly available, and if there are restrictions limiting its use.

—O’Brien, 2024

Indicators #

Stars: ★★★☆☆
Platform: Metaculus
Number of forecasts: 11

Capture #

Resizable preview:
Before 2028, will powerful open-source AI be regulated more tightly than closed-source AI, through newly-enacted US law?
25%
Unlikely
Last updated: 2024-10-07

In July 2024, Meta released Llama 3.1 405B, the biggest and best open-source model to date. Llama 3.1 outperforms the leading closed-source models, OpenAI’s GPT-4o and Anthropic’s Claude Sonnet 3.5, on multiple benchmarks, and expert consensus seems...

★★★☆☆
Metaculus
Forecasts: 11

Embed #

<iframe src="https://metaforecast.org/questions/embed/metaculus-27034" height="600" width="600" frameborder="0"></iframe>
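To use the embed, the snippet goes in the body of a host page. A minimal sketch follows; everything around the iframe (title, comments) is illustrative, only the `src` URL comes from the snippet above:

```html
<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="utf-8" />
    <title>Embedded Metaforecast question</title>
  </head>
  <body>
    <!-- Metaforecast embed for the Metaculus question.
         Note the explicit closing tag: <iframe> is not a
         void element, so it cannot be self-closing in HTML. -->
    <iframe
      src="https://metaforecast.org/questions/embed/metaculus-27034"
      height="600"
      width="600"
      frameborder="0"
    ></iframe>
  </body>
</html>
```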
