Proceedings Paper

On the Importance of Building High-quality Training Datasets for Neural Code Search

Publisher

IEEE COMPUTER SOC
DOI: 10.1145/3510003.3510160

Keywords

Code search; dataset; data cleaning; deep learning


The performance of neural code search is significantly influenced by the quality of the training data from which the neural models are derived. A large corpus of high-quality query and code pairs is required to establish a precise mapping from natural language to programming language. Due to limited availability, most widely-used code search datasets are built with compromises, such as using code comments as substitutes for queries. Our empirical study on a popular code search dataset reveals that over one-third of its queries contain noise that makes them deviate from natural user queries. Models trained on noisy data suffer severe performance degradation when applied in real-world scenarios. Improving the dataset quality and making the queries of its samples semantically identical to real user queries is critical for the practical usability of neural code search. In this paper, we propose a data cleaning framework consisting of two subsequent filters: a rule-based syntactic filter and a model-based semantic filter. This is the first framework to apply semantic query cleaning to code search datasets. Experimentally, we evaluated the effectiveness of our framework on two widely-used code search models and three manually-annotated code retrieval benchmarks. Training the popular DeepCS model with the filtered dataset from our framework improves its performance by 19.2% MRR and 21.3% Answer@1, on average, across the three validation benchmarks.
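The first stage of the framework, the rule-based syntactic filter, can be sketched as a set of rules over comment-derived queries. The concrete rules below (HTML markup, URLs, non-ASCII text, very short queries) are illustrative assumptions about typical comment noise, not the authors' exact rule set:

```python
import re

# Hypothetical patterns for noise commonly found in code comments.
HTML_TAG = re.compile(r"<[^>]+>")
URL = re.compile(r"https?://\S+")

def is_clean_query(query: str) -> bool:
    """Return True if a comment-derived query looks like a natural user query."""
    q = query.strip()
    if not q:
        return False
    if HTML_TAG.search(q) or URL.search(q):  # markup or link residue
        return False
    if not q.isascii():                      # crude non-English heuristic
        return False
    if len(q.split()) < 3:                   # too short to carry intent
        return False
    return True

def syntactic_filter(pairs):
    """Keep only (query, code) pairs whose query passes every rule."""
    return [(q, c) for q, c in pairs if is_clean_query(q)]
```

A semantic filter, as the paper describes, would then run after this stage, using a model to check that each surviving query is semantically consistent with a real user query.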

