(The ideas in this post deeply bother me…)
Singapore’s Proposal for Generative AI
Quoted from Proposed Model AI Governance Framework for Generative AI:
“AI, as a whole, is a technology that has been developing over the years. Prior development and deployment is sometimes termed traditional AI. To lay the groundwork to promote the responsible use of traditional AI, Singapore released the first version of the Model AI Governance Framework in 2019, and updated it subsequently in 2020. The recent advent of generative AI has reinforced some of the same AI risks (e.g. bias, misuse, lack of explainability), and introduced new ones (e.g. hallucination, copyright infringement, value alignment). These concerns were highlighted in our earlier Discussion Paper on Generative AI: Implications for Trust and Governance, issued in June 2023. The discussions and feedback have been instructive.”
The AI Verify Foundation's proposal represents a significant advance in addressing the unique challenges posed by this emerging technology. The framework focuses in particular on the alignment problem in generative AI, a concern that was far less prominent in traditional AI applications.
Recognizing the amplified risks associated with generative AI, such as bias, misuse, and lack of explainability – problems already identified in traditional AI – the AI Verify Foundation’s framework also addresses the novel challenges unique to generative AI. These include issues like hallucination, copyright infringement, and the critical need for value alignment, as highlighted in the Discussion Paper on Generative AI: Implications for Trust and Governance, published in June 2023.
The framework acknowledges that alignment research is not just an adjunct but a fundamental component in understanding and responsibly deploying generative AI. It aims to bridge the gap between the rapidly evolving capabilities of generative AI and the ethical, legal, and social implications these advancements entail. By doing so, it sets a precedent in AI governance, marking a proactive approach to managing emerging technologies responsibly.
Quoted: “Safety and Alignment Research & Development (R&D) – The state-of-the-science today for model safety does not fully cover all risks. Accelerated investment in R&D is required to improve model alignment with human intention and values. Global cooperation among AI safety R&D institutes will be critical to optimise limited resources for maximum impact, and keep pace with commercially driven growth in model capabilities.”
The insightful perspective offered in this paper illuminates a crucial gap in existing AI governance frameworks, such as ISO standards, the OECD AI Principles (which informed the proposed EU AI Act), and the US NIST AI Risk Management Framework. These frameworks, while comprehensive in many respects, have often overlooked the nuanced complexities of the alignment problem, especially as it pertains to generative AI. This omission raises an important question: why is the alignment problem, so central to the responsible development and deployment of generative AI, not adequately addressed in these established frameworks?
Understanding the Underrecognition of the AI Alignment Problem
A Collective Awareness Yet a Collective Oversight
While it's reassuring to know that entities like the AI Verify Foundation and communities such as Less Wrong are acknowledging the AI alignment problem, it's perplexing why this critical issue hasn't gained more traction among governments and institutions. My reflections lead to several hypotheses:
The Complexity and Dual Nature of AI: AI, often perceived as both fascinating and intimidating, presents a significant cognitive challenge. Its intricate nature makes it difficult for those outside the field to grasp its nuances fully. This complexity often leads to a dichotomous view of AI as either a technological marvel or a potential threat, overshadowing nuanced issues like AI alignment.
Communication Gap in AI Understanding: Explaining the intricacies of generative AI to a non-specialist audience is a daunting task. The science behind it is complex and continuously evolving, requiring not just technical understanding but also exceptional communication skills. This communication gap makes it challenging to convey the urgency and importance of aligning AI with human values and intentions.
Limited Focus on Alignment Issues: In my experience working in the Philippines, I've observed a scarcity of professionals who are dedicating their efforts to understanding and solving the AI alignment problem. This lack of focused expertise is a significant barrier to advancing the dialogue and action on this issue.
Competing Global Challenges: The world is currently grappling with numerous pressing issues, from geopolitical conflicts like the wars in the Middle East and the war between Russia and Ukraine to the global climate crisis. These urgent and visible problems often take precedence in public and policy discourse, leaving less immediate but equally important issues like AI alignment in the background.
Conclusion: The Perils of Neglecting AI Alignment Research
In the dynamic landscape of AI development, overlooking the significance of alignment research is akin to setting a digital time bomb. The many existing frameworks that aim to establish AI safety standards are commendable, yet they fall short if they fail to adequately address the misalignment issues prevalent in current generative AI systems. Without the critical lens of alignment research, we risk endorsing AI technologies that, though advanced, might be misaligned with societal values and human ethics.
This misalignment not only undermines the potential benefits of AI but also poses a tangible threat to societal well-being. Imagine a future where AI systems, integrated into our daily lives, operate on principles that deviate from human intentions and moral standards. The consequences could range from minor inconveniences to catastrophic failures, affecting everything from personal data security to critical decision-making in sectors like healthcare, finance, and public safety.
Therefore, as we stride forward in the AI era, it is important to build AI that is not only powerful and efficient but also aligned with the fundamental values and ethics of humanity. Ignoring alignment research is not just a missed opportunity; it is a gamble with our societal fabric. The need of the hour is not just to create AI but to steer it in the right direction for the benefit of all.