Journal
Proceedings of the 41st ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI '20)
Pages 91-105
Publisher
Association for Computing Machinery
DOI: 10.1145/3385412.3385997
Keywords
type inference; structured learning; deep learning; graph neural networks; meta-learning
Funding
- EPSRC [EP/J017515/1]
- EPSRC [EP/P005659/1] Funding Source: UKRI
Abstract
Type inference over partial contexts in dynamically typed languages is challenging. In this work, we present a graph neural network model that predicts types by probabilistically reasoning over a program's structure, names, and patterns. The network uses deep similarity learning to learn a TypeSpace, a continuous relaxation of the discrete space of types, and how to embed the type properties of a symbol (i.e., an identifier) into it. Importantly, our model can employ one-shot learning to predict an open vocabulary of types, including rare and user-defined ones. We realise our approach in TYPILUS, a tool for Python that combines the TypeSpace with an optional type checker. We show that TYPILUS accurately predicts types: it confidently predicts types for 70% of all annotatable symbols, and when it predicts a type, that type optionally type checks 95% of the time. TYPILUS can also find incorrect type annotations; two important and popular open-source libraries, fairseq and allennlp, accepted our pull requests that fixed the annotation errors TYPILUS discovered.
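To make the one-shot prediction idea concrete, the following is a minimal sketch (not the authors' implementation): once symbols and types live in a shared embedding space, predicting a type reduces to a nearest-neighbour lookup. The toy vectors, the `predict_type` helper, and the cosine metric are illustrative assumptions; in TYPILUS the embeddings come from a graph neural network trained with deep similarity learning.

```python
import math

def cosine(u, v):
    # Cosine similarity between two equal-length vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def predict_type(symbol_embedding, type_index):
    # Nearest-neighbour lookup in the (toy) TypeSpace: return the type
    # whose representative embedding is most similar to the symbol's.
    return max(type_index, key=lambda t: cosine(symbol_embedding, type_index[t]))

# Hypothetical TypeSpace: one representative vector per type. Because
# prediction is a lookup rather than a fixed softmax over a closed type
# vocabulary, adding a rare or user-defined type needs only one example
# embedding -- the one-shot aspect described in the abstract.
type_index = {
    "int": [1.0, 0.1, 0.0],
    "str": [0.0, 1.0, 0.2],
    "torch.Tensor": [0.1, 0.0, 1.0],  # user-defined/library type
}

query = [0.9, 0.2, 0.1]  # embedding of an un-annotated symbol
print(predict_type(query, type_index))  # -> int
```

A closed-vocabulary classifier would need retraining to support `torch.Tensor`; here it is supported by inserting a single vector into `type_index`.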