AI and the problem of infinite complexity

Demis Hassabis and Dario Amodei were interviewed together recently at Davos. Hassabis won the 2024 Nobel Prize in Chemistry for his work at Google DeepMind using AI to predict protein structures, work that offers potential rewards in treating disease. Amodei leads Anthropic, the AI start-up with an apparent emphasis on cooperation and safety.

Amodei speaks with urgency about the rapid advances of AI and its ability to write its own code. Hassabis is more patient and reflective. He notes the advances in scientific discovery but cautions that it will take time for these to build into major transformations. Amodei is anxious about social risks, including employment disruption. Hassabis thinks positive skills adaptation will mitigate labour-market upheaval. Hassabis' longer-term AI vision remains astonishing: he claims AI will come to understand the mechanics of the universe, opening the way to advanced space travel.

Utopian dreams of an AI knowledge revolution are peppered with worries about disasters along the way that are, ironically, largely unpredictable. This disjunction leaves us at the mercy of events. The problem is exactly how the boundaries between good and evil uses of AI will play out – but play out they will. We should, of course, work hard to encourage and regulate good AI behaviour and outcomes. But in an imperfect world, some terrible consequences will inevitably get through.

I recently heard an ex-army officer talk about the growing use of AI on the battlefield. When asked big questions about a negative future for humanity, he was strangely optimistic. Most people are good, he exhorted. I was left worrying about the few who do not meet this description. Occasionally they will succeed in using AI for bad purposes. And what happens when that use is large-scale and high-impact?

The irony lies in the gap between an AI that promises everything and the real-world danger that AI cannot discern good from bad. This reflects the deterministic limits of AI: it will never know everything, and there are always boundaries to its knowledge. Like us, it cannot do the impossible and achieve perfect understanding in a world of infinite complexity.

The UN's International Telecommunication Union (ITU), with its programme entitled AI for Good, comes close to demonstrating what ethical AI really looks like. This is public service, often focused on health, education, and the environment. What, then, is AI for bad? Parmy Olson's account in Supremacy: AI, ChatGPT, and the Race That Will Change the World offers a thesis. It tells of AI research organisations drifting away from a value-driven mission to save humanity. As they run out of research funding, they become dependent on big-tech profitability. Monetisation on a colossal scale threatens bad outcomes.

Infinite complexity is where the relationship between living things and their environment has its future created by relational feedback: dynamic patterns across intricate networks that are nonlinear and asymmetrical, producing constant innovation in distributed, emergent, and adaptive behaviour. In other words, it is impossible to predict either human or AI behaviour entirely from historical knowledge. Even abundant data and detailed accounts of the past give only partial insight into the future. No amount of data and algorithms can capture all the detail of this complex reality. The future resembles the past but is not identical to it.

I experienced this infinite limitation recently while training a machine-learning gradient-boosting model. Despite running numerous permutations of data patterns and structures, the boosting reaches a point where it cannot improve and instead gets worse: the model begins fitting noise rather than signal. The best solution is only ever partial. AI will give us some good insights and advice, but we will still have to decide what actions to take.
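The stalling-then-worsening behaviour described above is the familiar overfitting pattern in gradient boosting: training error keeps falling while error on unseen data bottoms out and then rises. A minimal sketch of that pattern, using synthetic noisy data and hand-rolled decision stumps rather than the author's actual model or dataset (all names and parameters here are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_stump(x, r):
    # Exhaustively choose the threshold split of x that minimises
    # squared error when fitting the current residuals r.
    best_sse, best = np.inf, None
    for t in np.unique(x)[:-1]:
        left = x <= t
        lm, rm = r[left].mean(), r[~left].mean()
        sse = ((r[left] - lm) ** 2).sum() + ((r[~left] - rm) ** 2).sum()
        if sse < best_sse:
            best_sse, best = sse, (t, lm, rm)
    return best

def stump_predict(stump, x):
    t, lm, rm = stump
    return np.where(x <= t, lm, rm)

# Synthetic data: a smooth signal plus noise, split into train and validation.
x_tr = rng.uniform(0, 1, 60); y_tr = np.sin(2 * np.pi * x_tr) + rng.normal(0, 0.3, 60)
x_va = rng.uniform(0, 1, 60); y_va = np.sin(2 * np.pi * x_va) + rng.normal(0, 0.3, 60)

# Boosting loop: each stump is fitted to the residuals of the ensemble so far.
# A full step (learning rate 1.0) is used deliberately to make overfitting fast.
F_tr, F_va = np.zeros_like(y_tr), np.zeros_like(y_va)
train_mse, val_mse = [], []
for _ in range(200):
    stump = fit_stump(x_tr, y_tr - F_tr)
    F_tr += stump_predict(stump, x_tr)
    F_va += stump_predict(stump, x_va)
    train_mse.append(((y_tr - F_tr) ** 2).mean())
    val_mse.append(((y_va - F_va) ** 2).mean())
```

Plotting `train_mse` against `val_mse` shows the point of limitation: training error falls towards zero as the ensemble memorises noise, while validation error stops improving and ends well above it. Practical libraries counter this with a small learning rate and early stopping on a validation set.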

Philip Haynes