While it's exciting to read about Sam Altman's enthusiasm for the superintelligence age we're entering, I can't help but feel that something crucial is being overlooked. Like many in big tech working on AI, he tends to sidestep what I call the "chicken and the egg" problem of AI:
Does Superintelligence come before safe AI, or does safe AI come before Superintelligence?
I firmly believe that Superintelligence is a prerequisite for creating safe AI for all.
And here's why:
The Limitations of Narrow Deep Learning
Current AI systems, particularly those relying on deep learning (which Sam Altman praises for its effectiveness), are inherently narrow and linear. Deep learning has indeed worked wonders in specific domains, but it's not without its limitations. Its narrow focus doesn't align well with the complex, layered realities of life and society.
These AI models excel at the tasks they're trained on but falter when faced with new, unstructured challenges. They're not interdisciplinary, scalable, or adaptable. They lack the ability to consider the myriad variables present in dynamic environments, sticking rigidly to their predefined goals without true awareness or understanding.
The Illusion of Intelligence
This narrow focus creates an illusion of intelligence. An AI might perform exceptionally well in a controlled setting but reveal its shortcomings when placed in a different context. It's like a chess grandmaster who can't play checkers: highly skilled in one area but lost in another.
Why Superintelligence First?
To achieve safe AI that benefits everyone, we need to move beyond these limitations. Superintelligence isn't just about being smarter; it's about developing AI that understands context, adapts to new situations, and considers the broader implications of its actions.
Deploying narrow deep learning models at scale without this level of understanding can lead to unintended consequences. Imagine an AI making decisions without fully grasping the societal or ethical implications; that's a recipe for disaster.
The Path Forward
We must prioritize developing superintelligent systems that can comprehend and navigate the complexities of human life. Only then can we ensure that AI technologies are not just powerful but also safe, ethical, and beneficial for all.
It's a challenging journey, but one we must undertake carefully. Superintelligence isn't just a milestone; it's a necessary foundation for building the safe AI future we all envision.