Journal
INFORMATION RETRIEVAL
Volume 12, Issue 1, Pages 81-97
Publisher
SPRINGER
DOI: 10.1007/s10791-008-9072-x
Keywords
Reference standards; Evaluation; Inter-annotator agreement; Text mining; Information retrieval
Abstract
With the help of a team of expert biologist judges, the TREC Genomics track has generated four large sets of "gold standard" test collections, comprising over a hundred unique topics, two kinds of ad hoc retrieval tasks, and their corresponding relevance judgments. Over the years of the track, increasingly complex tasks necessitated the creation of judging tools and training guidelines to accommodate teams of part-time, short-term workers from a variety of specialized backgrounds in the biological sciences, and to address the consistency and reproducibility of the assessment process. Important lessons were learned about factors that influenced the utility of the test collections, including topic design, the annotations provided by judges, the methods used for identifying and training judges, and the provision of a central moderator ("meta-judge").
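The abstract's concern with consistency across judges connects to the paper's "inter-annotator agreement" keyword. As a minimal sketch of how agreement between two relevance assessors might be quantified, the following Python computes Cohen's kappa over binary relevance judgments; the judges and judgment values shown are hypothetical illustrations, not data from the paper.

# Minimal sketch: Cohen's kappa for two judges' binary relevance
# judgments on the same documents. All values here are hypothetical,
# not taken from the TREC Genomics track data.
from collections import Counter

def cohens_kappa(judge_a, judge_b):
    """Cohen's kappa for two equal-length label sequences."""
    assert len(judge_a) == len(judge_b)
    n = len(judge_a)
    # Observed agreement: fraction of documents where judges concur.
    p_o = sum(a == b for a, b in zip(judge_a, judge_b)) / n
    # Expected chance agreement, from each judge's label marginals.
    counts_a, counts_b = Counter(judge_a), Counter(judge_b)
    labels = set(judge_a) | set(judge_b)
    p_e = sum((counts_a[l] / n) * (counts_b[l] / n) for l in labels)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical judgments (1 = relevant, 0 = not relevant).
judge_a = [1, 1, 0, 1, 0, 0, 1, 0]
judge_b = [1, 0, 0, 1, 0, 1, 1, 0]
print(f"kappa = {cohens_kappa(judge_a, judge_b):.3f}")  # kappa = 0.500

A kappa of 1.0 indicates perfect agreement, 0.0 indicates agreement no better than chance; in practice, a track moderator might use such a statistic to flag topic-judge pairs needing retraining or adjudication.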