How Disruptive is AI?
My general ideas on AI complexity, alignment, regulation, psychopaths and optimism.
AI? Is it just hype or something more?
The potential of AI to disrupt various industries and job markets is a topic of extensive debate and research. It's important to recognize that the impact of AI is not monolithic; it ranges from enhancing productivity and safety in workplaces to posing challenges such as job displacement. For instance, advancements in AI have automated tasks that were previously hazardous, thereby improving workplace safety. However, this automation also raises concerns about job security, as roles traditionally performed by humans are increasingly fulfilled by machines. This dichotomy underscores the need for a nuanced understanding of AI's role in our future workforce.
The Role of Technological Job Displacement in the Future of Work: This article discusses the impact of technological advancements on job displacement. It highlights how the number, quality, and stability of jobs are being altered by technological advancements, including digitalization, robotics, AI, and advanced computing. These innovations can automate unsafe tasks or reduce hazards, but they also cause job displacement when human workers are replaced by technology. The article emphasizes that almost half of workers hold automation-susceptible jobs and that not all displaced workers may find new ones.
Understanding the impact of automation on workers, jobs, and wages: This source provides a comprehensive overview of the impact of automation on workers, jobs, and wages. It notes that while automation can create new jobs, it can also lead to job displacement, particularly in sectors involving routine tasks. New jobs tend to pay well for highly educated analytical workers but often pay lower wages in personal services sectors. The article also discusses the importance of education and training in adapting to these changes, emphasizing that workers need more education and training to become complementary to machines.
Is artificial intelligence poised to disrupt your industry?: AI is revolutionizing diverse industries like healthcare, law, journalism, aerospace, and manufacturing. This transformation is driven by increased processing power, reduced costs, advancements in cloud computing, and improvements in AI capabilities like voice and pattern recognition. Deloitte anticipates a significant increase in commercial AI applications across product, process, and insight categories, which will require businesses to evaluate their processes, staffing models, data assets, and markets for AI integration.
How past tech disruptions can help inform the economic impact of AI: GenAI (generative artificial intelligence) systems are expected to substantially impact organizations, economies, and societies, but predicting their exact effects remains challenging. Key lessons from historical technological changes suggest that GenAI will likely boost productivity, with a potential delay in impact. AI is set to cause labor market disruptions by automating tasks and displacing workers, but it will also create new job types and roles across many sectors.
These are just four of the thousands of articles that have been written about AI as a disruptive technological force, especially since ChatGPT was released in November 2022. Personally, these sorts of disruptions are not what worry me most - I think the disruptive nature of AI reaches far beyond our current understanding. Let me explain in the next section.
What worries me more?
Turning our attention to the AI alignment problem, my journey into this field began in October 2022. Fascinated by effective altruism, I stumbled upon a blog post discussing the Alignment Awards. This piqued my curiosity about the significant investment in developing shutdown mechanisms for AI systems and ensuring their goals align with human values. The complexity of these topics, from goal-directedness to corrigibility, highlights how nascent our understanding of AI alignment still is. Despite extensive discussions in forums like LessWrong and contributions from leading researchers, a definitive solution to the alignment problem remains elusive. It's a challenge that requires continuous exploration and interdisciplinary collaboration.
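To make the corrigibility point a little more concrete, here is a minimal toy sketch (my own illustration, not any published formalism or the Alignment Awards' framing) of why "just add an off switch" is harder than it sounds: a naive expected-reward maximizer, facing any hypothetical nonzero chance of being switched off before finishing its task, prefers to disable the switch. The function names and numbers are made up for illustration.

```python
# Toy sketch of the shutdown/corrigibility incentive problem (illustrative only).
# A naive expected-reward maximizer compares the reward it expects if it allows
# shutdown versus if it disables the switch, and picks whichever is larger.

def expected_reward(action: str, task_reward: float, p_shutdown: float) -> float:
    """Expected reward for a toy agent pursuing a task worth `task_reward`."""
    if action == "allow_shutdown":
        # With probability p_shutdown the human presses the button and the
        # agent earns nothing from the task afterwards.
        return (1 - p_shutdown) * task_reward
    if action == "disable_switch":
        # Disabling the switch guarantees the full task reward.
        return task_reward
    raise ValueError(f"unknown action: {action}")

def choose_action(task_reward: float, p_shutdown: float) -> str:
    actions = ["allow_shutdown", "disable_switch"]
    return max(actions, key=lambda a: expected_reward(a, task_reward, p_shutdown))

if __name__ == "__main__":
    # For any nonzero chance of being switched off, the naive maximizer
    # prefers to disable the switch - exactly the behaviour corrigibility
    # research tries to design away.
    print(choose_action(task_reward=10.0, p_shutdown=0.2))  # -> disable_switch
```

The point of the sketch is not the arithmetic but the structure: unless the objective itself is designed to make shutdown acceptable, resisting shutdown falls out of plain reward maximization.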
On a sidenote: There are bad actors in the world we live in and we should not hand them powerful toys to play with.
(This is the most plausible harm from open-source models being readily accessible: a single psychopath could do tremendous damage with such powerful technology. One disastrous event enabled this way could negate all the benefits that AI brings. I will not explain how, since even a foolish actor might put the idea into practice; a tool that can be misused like this is not a safe tool.1)
On another sidenote: We have a big coordination problem.
Currently, a handful of major laboratories, including Hugging Face, dominate the AI landscape. Governments, unfortunately, lag in implementing regulatory frameworks to oversee these entities. A case in point is the EU AI Act, which won't come into effect until 2026 – a delay that leaves the current rapid advancements in private AI labs largely unregulated. Meanwhile, various nation-states conduct their own AI research, often without the international coordination needed to mitigate risks, such as running potentially harmful training runs. This situation underscores a stark reality: our traditional systems of governance struggle to keep pace with the rapid evolution of AI technology. In an era that demands urgent solutions, the critical discourse around AI is not receiving the attention it warrants. AI technology is unique in its potential to revolutionize not just specific sectors, but the very fabric of our daily lives and societal structures.
It feels like most people are sleeping on the problems I've mentioned here.
Grasping the essence of artificial intelligence is an inherently complex endeavor. It demands understanding across multiple disciplines, which makes it a daunting task for many. This complexity often leads to a default position in which most people accept the prevailing narratives about AI without deeper inquiry. My concern, however, is that our scientific grasp of how to control and steer this technology remains in its infancy. As a result, there's a collective tendency to overlook AI's intricacies: the average individual, confronted with the daunting task of unraveling them, may simply align with the majority consensus. Yet this approach leads to a lack of critical engagement with a technology that is rapidly reshaping our world.
I end this post with optimism.
I fully recognize the allure of comfort and the ease with which we can choose to disregard the long-term future of humanity. However, this is a path I consciously choose not to tread. In 2024, just as I did in 2023, I am committed to taking proactive steps towards shaping a better future, both for humanity at large and for the individuals I can impact within my lifetime. How will I do this? I am driven by the belief that if AI has the power to transform our world, it also holds the key to solving the alignment problem. By steering AI in a direction that aligns with our collective values and goals, we can harness its full potential. I am optimistic that the latest advancements in AI will foster environments conducive to these endeavors. My aim is to make a meaningful impact in the realm of AI, potentially helping nurture a movement where more individuals join in tackling the coordination challenges I've highlighted. I believe we can navigate these complexities and steer our future towards a more harmonious and aligned coexistence with AI.