Article

Automatic Identification of Individual Primates with Deep Learning Techniques

Journal

ISCIENCE
Volume 23, Issue 8

Publisher

CELL PRESS
DOI: 10.1016/j.isci.2020.101412

Funding

  1. National Natural Science Foundation of China [31872247, 31672301, 31270441, 31730104, 61973250]
  2. Strategic Priority Research Program of the Chinese Academy of Sciences [XDB31000000]
  3. Natural Science foundation of Shaanxi Province in China [2018JC-022, 2016JZ009]
  4. National Key Programme of Research and Development, Ministry of Science and Technology [2016YFC0503200]
  5. Shaanxi Science and Technology Innovation Team Support Project [2018TD-026]
  6. Shenzhen Safari Park
  7. Shaanxi Academy of Forestry

The difficulty of obtaining reliable individual identification of animals has limited researchers' ability to collect the quantitative data needed to address important ecological, behavioral, and conservation questions. Traditional marking methods place animals at undue risk. Machine learning approaches that identify species from animal images have proved successful, but many questions require a tool that identifies not only species but also individuals. Here, we introduce a system developed specifically for automated face detection and individual identification with deep learning methods, using both videos and still-frame images, that can be reliably applied to multiple species. The system was trained and tested on a dataset containing 102,399 images of 1,040 individuals of known identity across 41 primate species, plus 6,562 images of 91 individuals across four carnivore species. For primates, the system correctly identified individuals 94.1% of the time and processed 31 facial images per second.
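To illustrate the individual-identification task the abstract describes (this is a minimal sketch with toy data, not the authors' actual deep learning pipeline), one common formulation is: extract a face embedding per image, average each known individual's embeddings into a centroid, and assign a query image to the nearest centroid. The embeddings, individual labels, and distance metric below are all hypothetical.

```python
# Illustrative sketch only: nearest-centroid individual identification over
# face embeddings. The 2-D "embeddings" and labels here are toy stand-ins;
# a real system would obtain embeddings from a trained face-recognition CNN.
import numpy as np

def fit_centroids(embeddings, labels):
    """Average the embeddings of each known individual into one centroid."""
    ids = sorted(set(labels))
    labels = np.asarray(labels)
    centroids = np.stack([embeddings[labels == i].mean(axis=0) for i in ids])
    return ids, centroids

def identify(query, ids, centroids):
    """Return the individual whose centroid is closest to the query embedding."""
    dists = np.linalg.norm(centroids - query, axis=1)
    return ids[int(np.argmin(dists))]

# Toy gallery: two individuals ("A", "B") with well-separated embeddings.
emb = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
lab = ["A", "A", "B", "B"]
ids, cents = fit_centroids(emb, lab)

print(identify(np.array([0.05, 0.1]), ids, cents))  # -> A
print(identify(np.array([4.9, 5.2]), ids, cents))   # -> B
```

Reported metrics such as the paper's 94.1% would then be top-1 accuracy: the fraction of held-out query images whose predicted individual matches the true identity.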

