
Antoine Glory

Machine Learning Engineer and AI Project Coordinator

Flora, Fauna and Data: How to Stem an Artificial Disaster

MONTREAL, July 07, 2025 – In this blog post, cross-published with the IEEE Technology Center for Climate (ITCC), Antoine Glory, Machine Learning Engineer and AI Project Coordinator at CEIMIA, examines the entanglement of biodiversity and Artificial Intelligence, acknowledges the prospects AI brings, and advocates for biodiversity-focused AI strategies.

You don’t see what you’re losing. Environmental scientists often talk about what they term the sixth mass extinction, but why should we care? Building on a joint IEEE-Global Partnership on AI (GPAI) webinar on AI and biodiversity and on GPAI’s 2022 AI and Biodiversity report, we attempt to answer this thorny question and take a moment to reflect on the effect of the latest tech on the world we know: its plants, its animals, and its humans.

Well, the short answer is that all species on Earth are collectively responsible for the proper functioning of ecosystems. And unlike carbon, which is readily measurable, ecosystems are complex to capture; as proposed by Lovelock’s Gaia Hypothesis, they rely on biodiversity for interactions, cooperative effects, and self-regulation. What’s more, what we call the “terrestrial bias” isn’t helping: oceans hold the most biodiversity, yet the unknowns about marine biodiversity are perhaps the greatest of any part of the global ecosystem. That is because many measurement techniques we use on land don’t work underwater, and yet we need more insight into how life below the waves interacts with life on land if we want to preserve the natural world. “AI can be a wonderful tool to achieve this,” highlights Maik Schwalm from CapGemini in the webinar.

When biodiversity roots for AI

AI is increasingly used to predict or infer the various derived measurements required to study, report on, and manage biodiversity change across space and time. Computer vision algorithms, for example, are used for automatic species identification in camera trap imagery. Edge computing is a particularly promising technique to mitigate environmental impacts by processing data where the sensor is, allowing us to go where biodiversity is without physical human interference, flags Raja Chatila of Sorbonne Université. Some models can also help infer data where and when it is sparse; others are already widely used to monitor land- and sea-use change, invasive plant species, climate change, pollution sources and much more. Additionally, prediction tasks in biodiversity are moving away from traditional statistical methods such as regression and towards deep learning. Major advances are also expected from the growing use of knowledge-guided ML and explainable AI, which could help uncover the underlying ecological mechanisms of biodiversity. The Convention on Biological Diversity also extensively mentions AI as a vector of conservation efforts.
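To make the camera-trap example concrete, here is a minimal sketch of such a computer-vision identification step, assuming a generic ImageNet-pretrained classifier from torchvision as a stand-in for a model that would, in practice, be fine-tuned on regional camera-trap species; the image filename is hypothetical.

```python
# Minimal sketch: classify a single camera-trap image with a pretrained
# vision model (torchvision >= 0.13). A real deployment would use a model
# fine-tuned on regional camera-trap species rather than ImageNet classes.
import torch
from PIL import Image
from torchvision.models import resnet50, ResNet50_Weights

weights = ResNet50_Weights.IMAGENET1K_V2
model = resnet50(weights=weights).eval()   # pretrained backbone
preprocess = weights.transforms()          # matching resize / normalization

image = Image.open("camera_trap_frame.jpg").convert("RGB")  # hypothetical file
batch = preprocess(image).unsqueeze(0)     # shape: (1, 3, H, W)

with torch.no_grad():
    probs = model(batch).softmax(dim=1)[0]

top = probs.topk(3)
for p, idx in zip(top.values.tolist(), top.indices.tolist()):
    print(f"{weights.meta['categories'][idx]}: {p:.2%}")
```

In the edge-computing scenario Chatila describes, a model of this kind would run next to the camera itself, so that only labels or detections, not raw imagery, need to leave the field site.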

Footprint and policy hurdles

In a previous blog post about GenAI’s impact on the environment, we looked at the carbon footprint, water usage and other impacts of GenAI’s over-deployment, which were also touched upon in the webinar. “Generative AI is a tiny fraction of AI being deployed or considered for biodiversity and conservation – while having the biggest environmental impact within the AI world”, points out Tanya Berger-Wolf of The Ohio State University. She adds: “Gen AI has barely shown its usefulness in biodiversity monitoring and understanding the drivers of biodiversity loss”. In practice we don’t necessarily need AI, let alone the huge models we are seeing now, and can very well do with smaller, more specialized models such as MegaDetector for camera trap imagery, warns Daniele Silvestro of the University of Fribourg. Beyond this, AI is hogging a lot of investment at the expense of other fields, often for the sake of AI itself rather than for the benefits brought by its applications.
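As a rough illustration of how lightweight such specialized models are in practice, here is a hedged sketch of running MegaDetector v5 (distributed as YOLOv5 weights) through the standard YOLOv5 hub loader; the weights filename, image path and confidence threshold are assumptions, and the weights would need to be downloaded from the MegaDetector releases beforehand.

```python
# Minimal sketch: run MegaDetector v5, a compact specialized detector, on one
# camera-trap image via the YOLOv5 hub loader. Paths and threshold are
# illustrative; the weights file is assumed to have been downloaded already.
import torch

model = torch.hub.load("ultralytics/yolov5", "custom", path="md_v5a.0.0.pt")
model.conf = 0.2  # keep only reasonably confident detections

results = model("camera_trap_frame.jpg")  # hypothetical image file
for x1, y1, x2, y2, conf, cls in results.xyxy[0].tolist():
    label = model.names[int(cls)]         # MegaDetector classes: animal / person / vehicle
    print(f"{label} ({conf:.2f}) at [{x1:.0f}, {y1:.0f}, {x2:.0f}, {y2:.0f}]")
```

The point is less the specific library than the scale: a compact detector that runs on a laptop or an edge device, rather than a general-purpose foundation model.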

The field of biodiversity also lacks data governance policies and a long-term strategy; where these exist, they are very local at best: there is no global consensus, for example, on which species data should be protected. Nor is there a global effort and cooperation yet, despite a recent groundswell of demand for educational resources and opportunities aimed at policy makers as well as biologists, conservation practitioners and biodiversity managers. Indeed, AI can improve the flow-through of ecological knowledge into the implementation of conservation strategies (e.g. the Kunming-Montreal Global Biodiversity Framework and its 2030 targets), and tailor analyses and scenarios to specific conservation questions from academia, government, NGOs and industry, a recent Nature paper argues.

Biodiversity disparity and data asymmetry

Geographical availability of data doesn’t necessarily correlate with how biodiverse a geography is. Over the last 20 years a variety of new technologies have brought unprecedented amounts of data about the natural world, yet “most biodiversity data is in most developed economies and much less where biodiversity hotspots are,” points out Christopher Whitt of Lloyd’s Register Foundation. This highlights the need for greater global data sharing: the lack of effective contact points for data exchange has largely been attributed to a lack of skills and infrastructure, as well as to cultural barriers.

One type of initiative that can help overcome these challenges is open data infrastructure such as the Global Biodiversity Information Facility (GBIF), which aims to provide open access to data on all forms of life on Earth, available to anyone, anywhere. The technological asymmetry also includes a geographic imbalance in computing resources and models, which are crucial for making sense of all the raw data captured. And because this data has both spatial and temporal locality, the models trained to process it work best under similar conditions. The concept of distribution drift – i.e. when the data a model sees in the real world differs from the data it was trained on – is especially relevant in the context of climate change, because “what works today, might not work tomorrow”, notes Antoine Gagné-Turcotte of Whale Seeker, citing new temperatures, natural disasters or changing locations of animals.
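As an illustration of what checking for distribution drift can look like in practice, here is a minimal sketch that compares a single environmental feature between training time and deployment time with a two-sample Kolmogorov-Smirnov test; the feature, the simulated values and the threshold are purely illustrative.

```python
# Minimal sketch: flag distribution drift by comparing a feature observed at
# training time with the same feature in new field data, using a two-sample
# Kolmogorov-Smirnov test. Feature choice, values and threshold are illustrative.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
train_temps = rng.normal(loc=12.0, scale=3.0, size=5_000)  # e.g. water temperature at training time
field_temps = rng.normal(loc=14.5, scale=3.5, size=1_000)  # warmer conditions at deployment

stat, p_value = ks_2samp(train_temps, field_temps)
if p_value < 0.01:
    print(f"Likely drift (KS={stat:.3f}, p={p_value:.1e}): consider recalibrating or retraining.")
else:
    print("No strong evidence of drift on this feature.")
```

In a real monitoring pipeline, checks of this kind would run over many features (or over learned embeddings) and trigger recalibration or retraining, rather than a printout.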

Towards the best of both worlds

The webinar discussion highlighted a general consensus among panelists that developing technological solutions specific to the biodiversity context has to be a focus. A balance must be reached between developing general-use models and fine-tuning them for a specific ecological context, and developing small, highly specialized solutions, argues Berger-Wolf. GPAI’s 2022 AI and Biodiversity report offers comprehensive and curated expert recommendations for doing so, such as moving towards real-time monitoring and prediction (e.g. AI processing on board satellites to enable rapid disaster response) or government-mandated supply-chain data openness for high-risk drivers of biodiversity loss. It also highlights the need for much greater join-up globally between researchers, particularly on matters like data sharing, giving a key role to the small community of passionate AI and biodiversity researchers.

 

Antoine Glory, Machine Learning Engineer and AI Project Coordinator at CEIMIA – Montreal International Centre of Expertise in Artificial Intelligence.
