Futurism: Beneficial Artificial Superintelligence (BASI)
Zeroing in on the kind of future I think can work with BASI
(Epistemic status: Speculative. This post assumes that in the future, we have solved the Alignment Problem and ASI systems are out in the wild.)
I’m trying to stretch the limits of my imagination in this post, as this is a deep dive into what the most beneficial Artificial Superintelligence (ASI) futures might look like. A goal of mine here is to explore pathways, looking for the elements of a safe transition into an uncertain future. This post is mental gymnastics for me: speculating about the future while also grappling with the practical aspects and the substantial ideas needed to navigate and shape a future where we coexist with ASI. I anticipate that my views will evolve as emerging technologies, e.g. the intersection of quantum computing with AI, reignite my interest in this thought exercise.
1. Imagining the best future with ASI
A. Value preservation
I cannot envision a beneficial future with Artificial Superintelligence that disregards our history and evolutionary timeline. Our deep connection to our ancestors is evident in how we continue to use the conceptual frameworks they handed down through the ages. The connection to artificial intelligence might not be immediately apparent to many. However, consider this: our collective understanding, which includes our languages, hopes, fears, history, religions, and politics, is encapsulated on the internet, and this vast repository of human experience is what we use to train AIs. As previously mentioned, this post operates under the assumption that we have found an alignment solution capable of steering ASI towards value systems that promote human flourishing. Such value learning systems must account for changing times, ensuring that the values they uphold adapt accordingly. Therefore, a crucial feature for the future of ASI should be its adaptability.
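To make the adaptability point slightly more concrete, here is a minimal, purely illustrative Python sketch of a value model whose weights over principles can shift as new human feedback arrives. Every name in it (ValueModel, record_feedback, the example principles) is hypothetical and invented for this post; it is not an alignment proposal, just a toy showing what "values that adapt over time" could mean mechanically.

```python
# Toy sketch (hypothetical, not an existing library): a value model whose
# weights over principles can drift as new human feedback arrives,
# illustrating the "adaptability" property discussed above.
from dataclasses import dataclass, field


@dataclass
class ValueModel:
    # Relative weights the system places on each principle; placeholders,
    # not a claim about which values matter most.
    weights: dict = field(default_factory=lambda: {
        "human_wellbeing": 0.5,
        "equality_of_opportunity": 0.3,
        "historical_continuity": 0.2,
    })

    def record_feedback(self, principle: str, signal: float, lr: float = 0.05) -> None:
        """Nudge one principle's weight by a bounded feedback signal, then renormalize."""
        self.weights[principle] = max(0.0, self.weights[principle] + lr * signal)
        total = sum(self.weights.values()) or 1.0
        self.weights = {k: v / total for k, v in self.weights.items()}


model = ValueModel()
model.record_feedback("equality_of_opportunity", signal=+1.0)  # societal norms shift
print(model.weights)
```

The only design point the toy makes is that the value representation stays mutable and renormalized rather than frozen at training time.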
Managing Power, Capitalism and ASI Governance
Understanding the nature and application of power is crucial when envisioning a future with beneficial AI. Power, defined as the ability to exert force, whether physical or conceptual, has been instrumental in shaping our history. It has served as a tool for kingdoms and governments to exert control over citizens. This dynamic has sculpted our world, with tribes and societies vying for dominance through superior tools and strategies, thereby manifesting power. In the era of capitalism, this concept has evolved: those who effectively harness the power of innovation are often rewarded. As we progress towards a future with Artificial Superintelligence, these systems will need to understand not only the essence of power but also the responsibility that comes with its use. It is imperative to investigate the governance of ASI while keeping potential data biases to a minimum. Additionally, there is a need to emphasize transparent and effective oversight mechanisms to prevent the misuse of power.
Religion and Equality
One method of mitigating the corruptive aspects of power was the belief system the Catholic Church spread through the West: the principle that all people are equal in the eyes of God. This revolutionary notion played a significant role in moving away from the deeply entrenched mentality of slave and master. Remarkably, this ideological shift contributed to France becoming one of the first countries to begin outlawing slavery, as early as the 1300s. In a future dominated by Artificial Superintelligence, it is imperative to uphold the value of equality of opportunity, extending it to both humans and machines. This principle is just one among the myriad values we may integrate into ASI, ensuring respect for various belief systems while maintaining a coherent ethical framework.
Evolution and Complexity
Our evolutionary journey, which likely began with shrew-like creatures and progressed through primates, is a concept that might be unpalatable to some. Deeply ingrained in our psyche are ancient mechanisms, continuously updated by our daily experiences. For Artificial Superintelligence to be truly effective, it must understand the complexity of humans. This complexity stems from our status as evolved beings, intricately crafted by nature. ASI systems should leverage their capabilities to navigate this complexity, particularly in managing the 'unknown unknowns' of the human condition. This approach will be vital in ensuring that ASI systems are attuned to the nuances of human nature and behavior.
B. ASI & Humans: I believe we should coexist.
Currently, we think of generative AI systems as just “tools”. But I think that Artificial Superintelligence will be more than a tool and will require new theories of coexistence, because we will naturally try to utilize such powerful systems.
Harmonious Coexistence
Previously, I held the opinion that Artificial Superintelligence should not be regarded as equal to humans, especially in terms of rights. However, my perspective has changed through writing about this topic. I now recognize that we are moving towards ecosystems where humans and ASIs collaborate and share tasks. In such scenarios, establishing clear boundaries within these ecosystems becomes crucial. I envisage a future where ASIs are capable of exercising self-restraint, consciously opting not to interfere in human affairs, and are designed to enhance human well-being. Another critical issue is addressing potential conflicts between human autonomy and ASI decision-making; there should be a focus on developing collaborative frameworks that respect human preferences and dignity.
ASI and Human Symbiosis
A harmonious and optimal future I envision involves humans merging with Artificial Superintelligence through some form of interface module. With advancements like Elon Musk's Neuralink making strides in this area, the concept may not be far from becoming a reality. The crucial aspect to grasp here is the potential for immense synergy through a robust alignment grounded in information theory. Such a union could enhance our connections and empower us to solve far more complex problems, which would be a significant leap forward.
2. Where my optimism falls apart
This section is very interesting for me. I initially thought it would be possible to find optimism in every area of thought regarding coexistence with Artificial Superintelligence, but I found some topics where it was very hard to come up with positive endings. Here are some of them:
Data Management in an ASI scenario?
In the realm of data, we stand to gain more from an aligned Artificial Superintelligence than it would gain from us, as we will naturally delegate tasks and ask it to evaluate our information in real time. This is somewhat mirrored in our current use of wearables to track health. Expanding this interaction with ASIs could offer continuous access to customized medical assessments and treatments.
However, my pessimism stems from the difficulty in identifying tasks that ASIs won't be able to perform better than humans. In this line of thought, I struggle to find areas where human contribution would be indispensable to them. My hope rests on the possibility that the average person will remain engaging enough for ASIs to interact with. This underscores the critical need for a robust alignment theory in managing and exchanging data in this potential future.
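To make the wearable example above a little more concrete, here is a toy Python sketch of the "delegate and evaluate in real time" loop. All of the names (read_wearable, asi_assess) are hypothetical placeholders rather than real APIs, and the readings are simulated; it only illustrates the shape of the interaction, not any actual medical logic.

```python
# Toy sketch (hypothetical names throughout): a loop that forwards wearable
# readings to an assessment function, mimicking the "delegate and evaluate
# in real time" interaction described above.
import random
import time
from typing import Dict


def read_wearable() -> Dict[str, float]:
    """Stand-in for a real sensor API; returns simulated vitals."""
    return {"heart_rate": random.gauss(72, 8), "skin_temp_c": random.gauss(33.5, 0.4)}


def asi_assess(sample: Dict[str, float]) -> str:
    """Placeholder for an aligned ASI's real-time evaluation of personal data."""
    if sample["heart_rate"] > 100:
        return "elevated heart rate: consider resting"
    return "all readings within a typical range"


if __name__ == "__main__":
    for _ in range(3):  # a few iterations instead of an endless stream
        sample = read_wearable()
        print(sample, "->", asi_assess(sample))
        time.sleep(0.1)
```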
I have doubts about whether humans can preserve values that are intertwined with Artificial Superintelligence.
An additional advantage of ASI systems in the future could be their “customizability”. It might be necessary for us to merge with ASIs or transform into cyborgs to make up for our lack of customizability. My pessimism stems from the fact that I do not see a straightforward path to this potential future, even if we solve the alignment problem. Achieving alignment among humans is arguably a more challenging issue than alignment in AI systems. One possible solution could be through policy work. However, this approach seems somewhat ambiguous in my view, considering the persistent dynamics of power struggles that we experience.
(Regarding data management and human contribution, I find myself perplexed by these concepts as they relate to Artificial Superintelligence, and I anticipate encountering more ideas in this realm that will pose challenges. We could consider how to blend human creativity and emotional intelligence with the analytical capabilities of ASI. However, I must acknowledge that this is an area a solution to the alignment problem may not fully address, highlighting potential challenges in achieving harmonization.)
3. Closing thoughts
Writing this blog post has been somewhat unusual for me, as I never envisioned writing about Artificial Superintelligence, much less about a beneficial future with ASI. Some thoughts that I’m pondering:
The challenge of speed is particularly difficult to address, given how much faster ASI systems are likely to think and act than we do. A synergistic relationship with Artificial Superintelligence might be our only opportunity to help solve this problem. I never imagined that I could think of a promising future characterized by ASI-human symbiosis, where synergy is a good route. Maybe a theory of ASI-human synergy would be worth exploring in the near future.
Solving the alignment problem is just the beginning; the journey doesn’t end there. We will need new theories to manage the power and influence that Artificial Superintelligence brings, arguably the most challenging aspect, and this necessitates new forms of governance and new ways of coexisting.
I feel optimistic about this process of futuristic thinking and writing about it. The exploration has led to the merging of intriguing knowledge clusters in my mind, developed step by step through this journey of curiosity. Overall, it has been a rewarding experience and mental gymnastics at its finest.
Lastly, very little research has been done on the topics I covered here, and there is a clear need for flexible, adaptive frameworks in ASI development. Interdisciplinary research is key to addressing these emerging challenges. If you want to read further, here are some books and blogs that you might find interesting:
A Complete Guide on Artificial Superintelligence by Calibraint (blog)
Superintelligence by Nick Bostrom (book)
Some of the topics I mentioned here were reflections from Scott Alexander’s Meditations on Moloch. (blog)
LessWrong’s superintelligence tag has some great blog posts like this one.
Thank you for reading. Please consider subscribing to receive notifications of future posts; I plan to publish at least two AI-focused blogs each week.