Article

Crowdsourcing Experiment and Fully Convolutional Neural Networks for Coastal Remote Sensing of Seagrass and Macroalgae

Publisher

IEEE (Institute of Electrical and Electronics Engineers Inc.)
DOI: 10.1109/JSTARS.2023.3312820

Keywords

Convolutional neural network (CNN); crowdsourcing; deep learning (DL); multispectral; remote sensing

This article assessed the reliability of crowdsourced labels for estuarine vegetation and unvegetated sediment, and found that label accuracy was influenced by the participants' expertise and their familiarity with the study site. The results also confirmed that biases in participant annotation propagated into the performance of the deep learning models. Additionally, combining in situ and crowdsourced labels improved model performance compared with using in situ labels alone.
Recently, convolutional neural networks and fully convolutional neural networks (FCNs) have been successfully used for monitoring coastal marine ecosystems, in particular vegetation. However, even with recent advances in computational modeling and data acquisition, deep learning models require substantial amounts of good-quality reference data to effectively self-learn internal representations of input imagery. The classical approach for coastal mapping requires experts to transcribe in situ records and delineate polygons from high-resolution imagery such that FCNs can self-learn. However, labeling by a single individual limits the training data, whereas crowdsourcing labels can increase the volume of training data but may compromise label quality and consistency. In this article, we assessed the reliability of crowdsourced labels on a complex multiclass problem domain for estuarine vegetation and unvegetated sediment. An interobserver variability experiment was conducted to assess the statistical differences in crowdsourced annotations for plant species and sediment. The participants were grouped based on their discipline and level of expertise, and the statistical differences were evaluated using Cochran's Q-test together with the annotation accuracy of each group to determine observation biases. Given the crowdsourced labels, FCNs were trained with majority-vote annotations from each group to check whether observation biases were propagated to FCN performance. Two scenarios were examined: first, a direct comparison of FCNs trained with transcribed in situ labels and with crowdsourced labels from each group was established. Then, transcribed in situ labels were supplemented with crowdsourced labels to investigate the feasibility of training FCNs with crowdsourced labels in coastal mapping applications. We show that annotations sourced from discipline experts (ecologists and geomorphologists) familiar with the study site were more accurate than those from experts with no prior knowledge of the site and from nonexperts, with our results confirming that biases in participant annotation propagated into FCN performance. Furthermore, FCNs trained with a combined dataset of in situ and crowdsourced labels performed better than FCNs trained on the same imagery with in situ labels alone.
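The agreement analysis described in the abstract lends itself to a short illustration. The following Python sketch is not the authors' code; the array shapes, class count, and variable names are illustrative assumptions. It shows the two generic steps named above: aggregating stacked per-pixel annotations into a majority-vote label map, and applying Cochran's Q-test to binary "annotator correct at reference point" outcomes to test whether annotators differ.

import numpy as np
from scipy.stats import chi2


def majority_vote(labels):
    """Per-pixel majority-vote label map from stacked annotations.

    labels: integer class IDs, shape (n_annotators, height, width).
    Returns the modal class at each pixel, shape (height, width).
    """
    n_classes = int(labels.max()) + 1
    # Count votes for each class across annotators, then take the argmax.
    votes = np.stack([(labels == c).sum(axis=0) for c in range(n_classes)])
    return votes.argmax(axis=0)


def cochrans_q(successes):
    """Cochran's Q-test for k related binary samples.

    successes: 0/1 matrix, shape (n_items, k_annotators), e.g. whether each
    annotator labelled each reference point correctly.
    Returns (Q statistic, p-value) against chi-square with k - 1 d.o.f.
    """
    x = np.asarray(successes, dtype=float)
    n_items, k = x.shape
    col_totals = x.sum(axis=0)   # correct labels per annotator
    row_totals = x.sum(axis=1)   # annotators correct per reference point
    grand_total = x.sum()
    q = (k - 1) * (k * (col_totals ** 2).sum() - grand_total ** 2) \
        / (k * grand_total - (row_totals ** 2).sum())
    p_value = chi2.sf(q, df=k - 1)
    return q, p_value


# Toy example (hypothetical data): 3 annotators label a 4 x 4 tile with
# 3 classes (e.g. seagrass / macroalgae / unvegetated sediment), and their
# correctness is scored at 6 reference points.
rng = np.random.default_rng(0)
annotations = rng.integers(0, 3, size=(3, 4, 4))
agreement = np.array([
    [1, 1, 0],
    [1, 0, 0],
    [1, 1, 1],
    [0, 1, 0],
    [1, 1, 0],
    [1, 0, 1],
])

print(majority_vote(annotations))
print(cochrans_q(agreement))

A Q statistic with a small p-value would indicate that at least one annotator's success rate differs from the others, which is the kind of observation bias the article then traces into FCN performance.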
