Will transformer derived architectures accelerate progress in deep learning?
Question description
OpenAI's transformer-based GPT-3 has generated a lot of hype around the capabilities of current deep learning methods. GPT-3 appears capable of creative writing, as shown by Gwern. This creative potential, if applied to scientific writing or code generation, may accelerate research progress. If successfully applied to deep learning research itself, that acceleration could become self-reinforcing, with potential implications for the development of an AGI system. Indeed, the community forecast on the Metaculus question "When will the first Artificial General Intelligence system be devised, tested, and publicly known of?" moved roughly 10 years earlier in the months following the announcement of GPT-3.
Indicators
| Indicator | Value |
| --- | --- |
| Stars | ★★★☆☆ |
| Platform | Metaculus |
| Number of forecasts | 221 |
Embed
<iframe src="https://metaforecast.org/questions/embed/metaculus-5173" height="600" width="600" frameborder="0" />