A Turning Point in AI: According to Ilya Sutskever, the "Scaling Era" Is Over and the "Research Era" Begins
AI pioneer Ilya Sutskever argues that the "scaling era" on which current large language models are built has come to an end. In his view, progress can no longer come merely from more data centers and more chips; a new "research era" is needed to build more efficient, safer systems that can learn the way humans do. Sutskever emphasizes that with the company he founded on this premise, superintelligence could be reached within 5 to 20 years, but that it must be built with ethical values and a regard for all living beings.
Ilya Sutskever, co-founder and former chief scientist of OpenAI and one of the creators of AlexNet, which set off the deep learning revolution, has made striking statements about the future of AI. Sutskever argues that the "scaling era" (more data, larger models, more chips) that has dominated recent years will no longer deliver sustainable progress on its own. In his view, current systems lack the human ability to generalize from a few examples, and this is a fundamental problem that more hardware alone cannot solve. The engine of AI development should therefore no longer be merely scaled-up hardware, but the discovery of new "learning-to-learn" algorithms through research.
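To make the "learning-to-learn" idea concrete, here is a minimal sketch of a first-order MAML-style meta-learning loop on toy linear-regression tasks. This is an illustration of the general concept only, not SSI's method; the task family, model, and learning rates are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_task():
    """Draw a random toy task: a linear function y = a*x + b.
    (Hypothetical task family, chosen only to keep the sketch tiny.)"""
    a, b = rng.uniform(-2.0, 2.0, size=2)
    return a, b

def make_batch(a, b, n):
    """Sample n (x, y) pairs from the task."""
    x = rng.uniform(-1.0, 1.0, size=n)
    return x, a * x + b

def grads(params, x, y):
    """Analytic gradient of mean squared error for the model y_hat = w*x + c."""
    w, c = params
    err = (w * x + c) - y
    return np.array([2.0 * np.mean(err * x), 2.0 * np.mean(err)])

meta = np.zeros(2)              # meta-parameters shared across all tasks
inner_lr, outer_lr = 0.1, 0.01  # made-up learning rates for the toy setting

for step in range(2000):
    a, b = sample_task()
    xs, ys = make_batch(a, b, 5)    # 5 "support" examples: the few-shot regime
    xq, yq = make_batch(a, b, 20)   # "query" examples measure generalization

    # Inner loop: adapt to the new task from just five examples.
    adapted = meta - inner_lr * grads(meta, xs, ys)

    # Outer loop (first-order approximation): nudge the meta-parameters so
    # that a single few-shot adaptation step generalizes better on unseen
    # query points. This outer objective is what "learns to learn".
    meta -= outer_lr * grads(adapted, xq, yq)
```

The design point is in the outer update: it does not optimize performance on any single task, but how well one few-shot adaptation step generalizes to unseen data, which is the sense in which such a system "learns to learn".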
Sutskever explains that Safe Superintelligence (SSI), the company he founded with this vision, is working on more than 50 distinct technical principles and following a path apart from the conventional large-language-model approach. He believes that, with the right approach, superintelligence could be attained within 5 to 20 years. Because this transition will radically reshape economic and social structures, however, safety and ethics must come first. Sutskever underlines that superintelligence must be equipped with values that care not only for humans but for all living beings.
In conclusion, Ilya Sutskever's perspective signals a strategic shift in the field of AI. Rather than scaling without limit, we are entering a research period focused on discovering efficient, human-like learning principles. This period will also have to answer existential questions such as how superintelligence is to be controlled and aligned with humanity. Sutskever's warning is clear: the limitations of today's models should not lead us to ignore the risks of tomorrow; we must advance prepared and responsibly.
- Ilya Sutskever
- Safe Superintelligence
- Artificial Intelligence
- Scaling Era
- Research Era
- Superintelligence
- Deep Learning
- AI Safety