The limits to AI knowledge: the challenge of social complexity

Artificial Intelligence is where advanced information technology and techniques such as machine learning meet complex data, with the aim of making sense of it, explaining it, and showing what, if anything, can be done with the resulting knowledge.

Social and economic life is riddled with complexity that defies simple explanation. A society where outcome Y is simply caused by condition X is unlikely. For example, unemployment (Y) is rarely caused by rising wages (X) alone. More likely, rising unemployment (Y) is caused by a more complex pattern, such as some mix of rising wages (X) and a lack of productivity (Z). The mix might differ in different social circumstances. As a result, there is no consistent, linear relationship between the combination of X and Z and the outcome Y, especially when we try to compare different countries and regions.

Often, there are further patterns that cause unemployment (Y), even when they share common features such as rising wages (X). For example, Y may follow either from a combination of rising wages (X) and new technology (A), or from a combination of rising wages (X) and rising taxes for the employer (B). In this situation there is a common feature that is necessary (rising wages, X), but alone it will not cause rising unemployment unless one of the other features is present.
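The logic of this kind of pattern can be sketched in code. The sketch below is purely illustrative: the condition names and the rule itself are assumptions for exposition, not findings.

```python
# A minimal sketch of a necessary-but-insufficient condition.
# X = rising wages, A = new technology, B = rising employer taxes.
# X must be present, but Y (rising unemployment) only follows when
# X combines with A or B.

def unemployment_rises(x: bool, a: bool, b: bool) -> bool:
    """Y occurs only when the necessary condition X meets A or B."""
    return x and (a or b)

# Rising wages alone do not produce the outcome...
assert unemployment_rises(x=True, a=False, b=False) is False
# ...but combined with new technology, they do.
assert unemployment_rises(x=True, a=True, b=False) is True
# Without the necessary condition X, neither A nor B suffices.
assert unemployment_rises(x=False, a=True, b=True) is False
```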

More likely, we find that variety and complexity can create many different causal patterns of rising unemployment (Y). Some causal patterns might not even include rising wages (X). There can be alternative patterns that include increases in the import-to-export ratio (C), or rising input production costs, such as the cost of energy (E). There is no common feature here, only different paths that lead to the same outcome of higher unemployment (Y).
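These alternative paths to the same outcome can be sketched as a set of causal ‘recipes’, some of which contain no wage component at all. Again, the recipes are hypothetical illustrations, not a fitted model:

```python
# Each recipe is a set of conditions that is jointly sufficient for Y.
# X = rising wages, Z = lack of productivity, A = new technology,
# B = rising employer taxes, C = import-to-export ratio, E = energy costs.
RECIPES = [
    {"X", "Z"},  # wages plus weak productivity
    {"X", "A"},  # wages plus new technology
    {"X", "B"},  # wages plus employer taxes
    {"C"},       # import/export shift, no wage component
    {"E"},       # rising energy costs, no wage component
]

def causes_unemployment(conditions: set[str]) -> bool:
    """Y occurs if any sufficient recipe is fully present in the case."""
    return any(recipe <= conditions for recipe in RECIPES)

assert causes_unemployment({"X", "Z"})    # a wage-based path
assert causes_unemployment({"E"})         # a path with no common feature
assert not causes_unemployment({"X"})     # X alone is not sufficient
```

The `<=` subset test expresses the idea that a recipe fires only when all of its conditions are present in a given case.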

We can imagine, and find evidence for, hundreds of different occurrences and combinations that might contribute to rising unemployment, in different pattern combinations in different social circumstances. Of course, this modelling of complexity becomes potentially more useful at the overall, aggregate level if the probabilities across all patterns show that some conditions are more likely to occur than others, and that these conditions recur across several differing patterns. However, there are still likely to be important exceptions (sometimes called ‘outliers’) where the reasons for rising unemployment do not include the conditions shown to be of higher probability at the aggregate level.
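The point about aggregate probabilities and outliers can be illustrated by counting how often each condition appears across a set of cases. The cases below are entirely invented for illustration:

```python
from collections import Counter

# Hypothetical cases of rising unemployment, each recorded as the
# set of conditions present in that case.
cases = [
    {"X", "Z"}, {"X", "A"}, {"X", "B"}, {"X", "Z"}, {"X", "A"},
    {"E"},  # an outlier: rising energy costs, none of the usual conditions
]

# Count how often each condition occurs across all cases.
frequency = Counter(cond for case in cases for cond in case)
dominant = frequency.most_common(1)[0][0]  # 'X' dominates in aggregate

# Outliers: cases whose causes do not include the dominant condition.
outliers = [case for case in cases if dominant not in case]

print(dominant)  # X
print(outliers)  # [{'E'}]
```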

We need to consider the contingencies of time and space. The complex causal patterns described will change over time and across space, as new dynamics and interactions arise and affect some countries and regions more than others. These dynamic relationships are not stable or easily predictable. The patterns we find do not all evolve in the same way but in diverse ways, sometimes forming new, more probable underlying causal conditions at the aggregate level in the future.

Social and economic science is shaped by these complex relationships and indeterminacies. The natural sciences are affected less so, but still in some circumstances: tidal systems are simple and predictable, but the weather system cannot be predicted with the same accuracy, especially over the medium and longer term.

The proponents of AI argue that the ability of AI to perceive and understand complex systems and their patterns will make all these systems better understood and predictable. AI will have a clearer and more accurate sense of complex patterns and how they are evolving than orthodox human understandings.

Certainly, AI can run the permutations of such complexity much more quickly and in more sophisticated ways than human research analysis. AI is changing the nature of such analysis and synthesis, and the role of those who do it.

Less clear is the extent to which AI can improve the applied usefulness and productivity of such modelling.

What about the added challenge of political and ideological perspectives? Human actors, given their moral disposition and cultural view of their own existence, have beliefs and ideas about what is ‘right’ and ‘wrong’, ‘good’ and ‘bad’, regardless of what data patterns reveal. For example, many governments judge that unemployment is a bad occurrence because of its consequences for human self-identity and sense of purpose.

Of course, these aspects can be captured in ‘data’ fed to AI machine learning, through statistics about mental health, poverty, crime and so on. But such considerations are a long way from the knowledge of experiencing unemployment, of having known its impact in your own family and community. AI cannot and does not experience these emotions. It only knows what human inputs have told it. It does not experience human and social existence. One’s emotional experience of society, and the empathy for others that results, are argued to have the potential to lead to better judgements about how to change policy and behaviour.

AI does not offer a ‘magical’ view of the social and economic world. It is only as good as the data it is fed, which continues to be updated and improved but can never reach perfection. There are concerns that ‘more’ becomes ‘less’: as larger volumes of limited-quality data are repeatedly recycled through the AI modelling process, the output becomes redundant.

To see patterns and their trajectories based on historical learning is not a guarantee that these pattern trajectories will remain. The resulting dynamics might be unique transformations, occurrences that are unknowable and not possible to predict. This is why policy and practice decisions will continue to be made by people, not by machines.

AI will assist policy and practice but not solve many real-world complex problems. Sometimes, like civil servants and advisors, AI gives erroneous advice and contributes to bad human decision making.