Ecologist Kaitlyn Gaynor, currently a postdoctoral researcher at UC Santa Barbara’s National Center for Ecological Analysis and Synthesis, recently coauthored the research paper “Iterative human and automated identification of wildlife images,” which explores the role of artificial intelligence, or AI, in processing data collected from camera traps. The data analyzed in this study come from a long-term camera trap project in Gorongosa National Park in Mozambique and include various photos of local flora and fauna.

The front of a roaming elephant captured by one of the researcher’s camera traps. Courtesy of Kaitlyn Gaynor

The study looks at how AI and human scientists can work together to improve both efficiency and accuracy in the processing of photographic data. The researchers anticipate that this “synergistic collaboration,” with humans and AI checking each other’s work, will yield high-confidence conclusions and ultimately better-quality results than either could provide on their own.

The question of how to improve data processing has been a major point of interest for Gaynor. “I have been using camera traps to study the recovery of large mammals in Mozambique’s Gorongosa National Park since 2016,” Gaynor said. “And processing the photos has always been a big bottleneck.” The paper discusses how one of the biggest challenges to long-term camera trap monitoring is the sheer amount of human labor needed to annotate and analyze the data.

This is the second paper that Gaynor’s colleague Zhongqi Miao, a doctoral candidate at UC Berkeley, has led using this data set. The first paper, published in 2019, examined a deep learning algorithm that assigns importance to different aspects of each image in order to identify the object present in the photo. Deep learning is a type of AI that mimics methods of human knowledge acquisition and is especially useful for tasks that involve visual recognition. “Interestingly, this process revealed that the computer vision algorithm often clued in on different visual features of species than humans would, which improved its accuracy for cases when only part of the animal was visible,” Gaynor remarked.
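One common way to inspect which visual features a trained classifier relies on is a gradient-based saliency map. The sketch below is only a minimal illustration of that idea, not the model from the 2019 paper: it assumes a PyTorch environment, uses a pretrained ResNet-50 as a stand-in classifier, and the image file name is a placeholder.

```python
import torch
from torchvision import models, transforms
from PIL import Image

# Hypothetical stand-in classifier: a pretrained ResNet-50, not the study's model.
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

# "camera_trap_photo.jpg" is a placeholder file name.
image = Image.open("camera_trap_photo.jpg").convert("RGB")
x = preprocess(image).unsqueeze(0).requires_grad_(True)

# Backpropagate the top class score to the pixels to see which regions drove the prediction.
scores = model(x)
top_class = scores.argmax(dim=1).item()
scores[0, top_class].backward()

# Saliency map: gradient magnitude per pixel, taking the max over the color channels.
saliency = x.grad.abs().max(dim=1).values.squeeze(0)  # shape: (224, 224)
row, col = divmod(int(saliency.argmax()), saliency.shape[1])
print(f"Predicted class index {top_class}; most influential pixel at ({row}, {col})")
```

Pixels with large gradient magnitudes are the ones that most influenced the prediction, which is how a researcher might notice the model keying in on, say, an ear or a leg rather than the whole animal.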

The new paper contributes significantly to what is already known about AI and its implications for use in ecological monitoring through the development of a system that “integrates human expertise into the machine learning loop,” according to Gaynor. This type of approach has never been taken before, and she is excited about its implications for the future of wildlife research. 

“The algorithm assigns a confidence level to each classification, and low-confidence predictions are flagged for further review,” Gaynor said. “Subsequent identifications are then used to retrain the model. This ultimately results in a much lower amount of human labor than would be required for manual identification of every image, but a higher accuracy than could be achieved from the machine learning algorithm alone.”
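In code, the loop Gaynor describes might look roughly like the following sketch. The classify, ask_expert and retrain callables and the 0.9 confidence threshold are hypothetical placeholders, since the article does not describe the actual implementation.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Tuple

Photo = bytes                      # placeholder for raw image data
Prediction = Tuple[str, float]     # (species label, confidence in [0, 1])

@dataclass
class IterativeLabeler:
    """Human-in-the-loop labeling: accept confident model predictions, send the rest to people."""
    classify: Callable[[Photo], Prediction]              # hypothetical: model label + confidence
    ask_expert: Callable[[Photo], str]                    # hypothetical: human-supplied label
    retrain: Callable[[List[Tuple[Photo, str]]], None]    # hypothetical: fine-tune on new labels
    confidence_threshold: float = 0.9                     # assumed value; not given in the article
    corrections: List[Tuple[Photo, str]] = field(default_factory=list)

    def label_batch(self, photos: List[Photo]) -> List[Tuple[Photo, str]]:
        labeled = []
        for photo in photos:
            species, confidence = self.classify(photo)
            if confidence < self.confidence_threshold:
                species = self.ask_expert(photo)            # humans review only flagged images
                self.corrections.append((photo, species))   # keep the corrected labels
            labeled.append((photo, species))
        if self.corrections:
            self.retrain(self.corrections)                  # human labels feed the next model
            self.corrections.clear()
        return labeled
```

With each round of retraining, the share of images routed to experts should shrink as the model grows more confident.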

For this paper, Gaynor and her colleague Meredith Palmer, a postdoctoral researcher at Princeton University, were responsible for data collection in the field and for reviewing the 20% of photos that the AI algorithm had classified as “low confidence,” correcting any that had been misidentified. “This saved us significant time and resulted in high accuracy — around 90% of the images were classified correctly,” Gaynor said. “Accuracy remains lower for the rare species, and we hope to get these numbers up higher with the addition of more data from future seasons and continued fine-tuning of the model.”
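A back-of-the-envelope calculation shows how reviewing only a fifth of the photos can still push overall accuracy to roughly 90%. The 20% review share and the ~90% overall figure come from the article; the two per-stream accuracies below are assumed values chosen only for illustration.

```python
# Illustrative arithmetic only. The 20% review share and ~90% overall accuracy are from the
# article; the two per-stream accuracies below are assumptions, not reported results.
auto_share = 0.80        # photos accepted directly from high-confidence model predictions
auto_accuracy = 0.88     # assumed accuracy of the model on that share
human_share = 0.20       # low-confidence photos routed to human review (from the article)
human_accuracy = 0.98    # assumed accuracy after expert correction

overall = auto_share * auto_accuracy + human_share * human_accuracy
print(f"Overall accuracy is about {overall:.0%}")  # prints "Overall accuracy is about 90%"
```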

A lion and cub captured by one of the researcher’s camera traps. Courtesy of Kaitlyn Gaynor

When asked about the most interesting or surprising findings of the research, Gaynor said, “I was surprised at how accurate the computer vision algorithm was in cases when only part of the animal was visible, or otherwise hard to see. When we were testing this model on data that had already been classified, I took a closer look at some of the ‘incorrect’ model classifications and found that the model was actually correct, and our initial classifications were wrong! The model often did a better job than we did.”

Regarding the future of artificial intelligence in wildlife monitoring and conservation, Gaynor said that “machine learning methods will be critical for keeping pace with data processing and in the translation of raw data into usable information that can inform science and conservation.”

The next step for Gaynor and her colleagues is to translate the algorithm into simple, usable software that is accessible to researchers in the field, specifically in Gorongosa National Park. Additionally, Gaynor’s team is working to recruit “citizen scientists” and other volunteers around the world to help identify animals in camera trap data, and to incorporate those identifications into the AI-human collaborative approach. They hope both to provide a unique learning opportunity for the volunteers and to create “a workflow that is fast, efficient and educational.”