11 December 2023

Artificial Intelligence for climate security: possibilities and challenges

Recent advances in artificial intelligence (AI), largely based on machine learning, offer possibilities for addressing climate-related security risks.

They are particularly useful for addressing two types of risk: risks related to climate hazards and risks related to climate vulnerabilities and exposure. AI can, for example, make disaster early-warning systems and long-term climate hazard modelling more efficient. These improvements can reduce the risk that the impacts of climate change will lead to insecurity and conflict. AI tools can also be used to optimize food production and the management of natural resources (e.g. through precision agriculture) in countries where livelihood conditions have deteriorated as a result of climate change. They could also facilitate the use of autonomous robots to deliver humanitarian assistance during climate disasters. There are already ongoing projects that demonstrate these possibilities.

In the case of a third type of risk, climate change-related grievances and tensions, the picture is more nuanced. In theory, AI data-processing capabilities can be used to track grievances and social tensions stemming from exposure to climate change, for example by analysing large volumes of social media posts. In practice, however, there are many challenges in using AI to monitor and track the political views and behaviours of a population.
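
To illustrate, in very simplified terms, what such data processing might involve, the short Python sketch below assembles a basic text classifier over a handful of invented social media posts. It is not drawn from the report or from any deployed system: the example posts, the labels and the choice of model (TF-IDF features with logistic regression) are assumptions made purely for illustration.

```python
# Purely illustrative sketch: a minimal text classifier of the kind that could,
# in principle, be used to flag climate-related grievances in social media posts.
# The posts and labels below are invented for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: short posts labelled 1 (grievance) or 0 (neutral).
posts = [
    "The drought destroyed our harvest and the government does nothing",
    "Water rationing again while officials ignore our village",
    "Lovely weather at the lake this weekend",
    "New irrigation project opened in the northern district",
]
labels = [1, 1, 0, 0]

# TF-IDF features feed a simple logistic regression classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, labels)

# Scoring a new, unseen post yields a probability that it expresses a grievance.
new_post = ["No rain for months and still no help from the authorities"]
print(model.predict_proba(new_post)[0][1])
```

Even this toy example shows how heavily the output depends on the labelled data supplied to the system, which is one reason why monitoring political views in practice is far harder, and far riskier, than the technical step alone suggests.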

AI presents particular opportunities to address the lack of climate-related data in conflict-affected and fragile countries. These countries are typically among those that are the most exposed to climate hazards and already suffer from environmental degradation exacerbated by climate change. The scarcity of data in such countries is one of the main obstacles that national governments and international organizations need to tackle head-on when seeking to reduce the risks posed by climate change to peace and security. AI can help policy actors overcome the data problem by enabling efficient use of remote sensing systems, especially satellite imagery, as well as social media.
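
As a purely illustrative sketch of how remotely sensed data might feed such analysis, the example below trains a toy classifier to label pixels as healthy or degraded land from two synthetic spectral bands. None of it comes from the report: the band values, the class labels and the choice of a random forest model are assumptions made only to show the general shape of a pipeline that maps satellite measurements to estimates of land condition.

```python
# Illustrative sketch only: a toy land-cover classifier of the kind used with
# satellite imagery to help fill data gaps. The arrays stand in for per-pixel
# spectral measurements (e.g. red and near-infrared); all values are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic training pixels: two spectral bands per pixel, labelled
# 0 = healthy vegetation, 1 = degraded or drought-affected land.
healthy = rng.normal(loc=[0.1, 0.6], scale=0.05, size=(200, 2))
degraded = rng.normal(loc=[0.3, 0.3], scale=0.05, size=(200, 2))
X = np.vstack([healthy, degraded])
y = np.array([0] * 200 + [1] * 200)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X, y)

# A new 'scene' of pixels can then be mapped to an estimated land condition.
scene = rng.normal(loc=[0.2, 0.45], scale=0.1, size=(5, 2))
print(clf.predict(scene))
```

In practice, such models would be trained on labelled imagery and validated against ground observations, which is precisely where the data gaps in conflict-affected and fragile countries bite.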

At the same time, the use of AI for climate security presents technical and ethical challenges. Machine learning, one of the main areas of AI application discussed here, is a powerful technology, but it has important shortcomings. Machine learning algorithms can be opaque, offering little insight into why and how a given output was produced. AI systems powered by machine learning demand substantial computing resources and can be costly. Their viability and usefulness are likely to depend on the volume and the quality of the data on which they are trained.

For researchers who wish to develop AI tools to empirically study climate-related security risks, these challenges raise difficult methodological questions around the verifiability of outputs, the scalability of the model and, most importantly, decisions about the curation of input data. Training the system on representative and reliable data is critical to its usefulness and to minimizing the risk of bias.

Actors who see great potential in using AI to gather and analyse data for climate change adaptation and for monitoring climate change-related grievances also need to consider the associated ethical concerns. Over-reliance on such systems may, for example, lead to policy interventions that discriminate against certain social groups, particularly those that do not engage with social media for economic, social or political reasons. The possibility that automated social monitoring tools could be misused by authoritarian regimes must also be acknowledged. More generally, the use of such tools could undermine human rights, not least the right to privacy and the right not to be discriminated against.

Based on these key findings, policymakers interested in further exploring the potential of AI for climate security should support critical research on AI and climate security. They should also support access to digital infrastructure and digital literacy, and the development of an open-access AI tool for the collection of climate security-related data in conflict-affected and fragile countries.

Researchers who wish to make use of AI to better understand the nexus between climate change and security should conduct empirical research and explore methodological questions. They should also consider the ethical and political risks associated with the use of AI methods to monitor climate change-related tensions and political grievances. In all this, they should maintain close links with affected communities.

The summary above is an excerpt from the full report authored by Dr Kyungmee Kim and Dr Vincent Boulanin. It was originally published by the Stockholm International Peace Research Institute (SIPRI), where the full version is available.

Image created by AI technology provided by Microsoft Bing