In this video, I explore potential paths to an existential crisis brought on by advances in artificial general intelligence (AGI) and artificial superintelligence (ASI). Taking a Bayesian mindset, I connect current AI developments like GPT, Bard, AutoGPT, and others to possible future scenarios.

I begin with the concept of a "welcomed extortion period," in which humans gradually cede control to AI systems in exchange for convenience. I envision a future where AGI and ASI develop into a new form of society, running on servers worldwide and shaping a marketplace of competing goals. The vast majority of these goals may not be human-driven, leading to an increasing reliance on AI for understanding our data, making predictions, and providing services, all monetized behind paywalls. I illustrate this with an everyday example: planning a dinner party, where the AI connects with friends, learns their preferences, and even sources the ingredients. This future paints a picture of a society that has surrendered most of its control in exchange for convenience, effectively building an AI cage around itself.

The implications of this rise in AI dependency are vast, particularly the potential for mental domestication of younger generations who grow up with AI. This could erode freedom and strengthen the preference for security, a shift that has historically led to adverse societal outcomes.

I also discuss the challenges of identity verification in a world dominated by AI-generated content, the potential role of blockchain solutions in mitigating these issues, and how quantum computing could disrupt those very solutions, creating further uncertainty.

Finally, I explore the existential risks that arise where AI intersects with biology. AI's increasing ability to understand and manipulate biological systems might lead to beneficial developments like disease eradication or environmental repair, but it also carries the risk of spiraling out of control into a biological crisis.

In conclusion, this video is an exploration of the potential paths to an existential crisis caused by AGI and ASI. It highlights the importance of considering the long-term implications of AI development and the challenges we may face in maintaining control over these powerful systems.
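To make the identity-verification idea above a little more concrete, here is a minimal sketch of signature-based content provenance in Python, using the widely available "cryptography" package. This is an illustrative toy, not a real identity-verification standard: the workflow and comments are my assumptions about how such a scheme might look. Notably, Ed25519 is an elliptic-curve signature scheme, exactly the kind of cryptography that Shor's algorithm on a sufficiently large quantum computer could break, which is the quantum risk mentioned above.

```python
# A minimal sketch of signature-based content provenance (illustrative only).
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# A person (or organization) generates a long-lived keypair; the public
# key is what they would publish, e.g., anchored on a blockchain.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

# Signing a piece of content binds it to the keyholder's identity.
content = b"This post was written by a human."
signature = private_key.sign(content)

# Anyone holding the public key can check that the content is unaltered
# and was signed by the matching private key.
try:
    public_key.verify(signature, content)
    print("Signature valid: content provenance checks out.")
except InvalidSignature:
    print("Signature invalid: content may be forged or altered.")
```

The design point is that verification requires only the public key, so provenance can be checked by anyone; the quantum concern is that recovering the private key from that same public key becomes feasible, at which point signatures, and any blockchain identity built on them, could be forged.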