License: Creative Commons Attribution 4.0 International license (CC BY 4.0)
When quoting this document, please refer to the following
DOI: 10.4230/LIPIcs.ECOOP.2023.10
URN: urn:nbn:de:0030-drops-182037
URL: http://dagstuhl.sunsite.rwth-aachen.de/volltexte/2023/18203/


Dietrich, Jens; Pearce, David J.; Chandramohan, Mahin

On Leveraging Tests to Infer Nullable Annotations

pdf-format: LIPIcs-ECOOP-2023-10.pdf (0.8 MB)


Abstract

Issues related to the dereferencing of null pointers are a pervasive and widely studied problem, and numerous static analyses have been proposed to detect them. These are typically based on dataflow analysis and take advantage of annotations indicating whether a type is nullable or not. The presence of such annotations can significantly improve the accuracy of null checkers. However, most code found in the wild is not annotated, and tools must fall back on default assumptions, leading to both false positives and false negatives. Manually annotating code is a laborious task that requires deep knowledge of how a program interacts with its clients and components.
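To make the role of such annotations concrete, a brief illustration (not taken from the paper), using JSR-305's javax.annotation.Nullable: an annotated return type tells a checker that callers must guard against null, while unannotated code forces the tool back onto its defaults.

```java
import javax.annotation.Nullable;

class User { /* stub type for illustration */ }

interface UserStore {
    // Annotated: a null checker knows callers must handle a null result.
    @Nullable
    User find(String id);

    // Unannotated: the checker must fall back on a default assumption
    // (e.g. "never null"), risking false positives or false negatives.
    User load(String id);
}
```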
We propose to infer nullable annotations from an analysis of existing test cases. For this purpose, we execute instrumented tests and capture nullable API interactions. These recorded interactions are then refined (sanitised and propagated) in order to improve their precision and recall. We evaluate our approach on seven projects from the Spring ecosystem and two Google projects that have been extensively annotated by hand with thousands of @Nullable annotations. We find that our approach achieves high precision and can find around half of the existing @Nullable annotations. This suggests that the proposed method can mechanise a significant part of this very labour-intensive annotation task.
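A minimal sketch of the kind of runtime capture the abstract describes, assuming instrumented code reports observed null arguments and return values to a global registry whose contents become @Nullable candidates. All names here (NullCaptureRegistry, recordArgs, recordReturn, dump) are hypothetical illustrations, not the authors' actual implementation:

```java
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public final class NullCaptureRegistry {

    // method signature -> indices of parameters observed as null at least once
    private static final Map<String, Set<Integer>> nullArgs = new ConcurrentHashMap<>();
    // method signatures whose return value was observed as null at least once
    private static final Set<String> nullReturns = ConcurrentHashMap.newKeySet();

    // Hook injected at each instrumented method entry.
    public static void recordArgs(String method, Object... args) {
        for (int i = 0; i < args.length; i++) {
            if (args[i] == null) {
                nullArgs.computeIfAbsent(method, k -> ConcurrentHashMap.newKeySet())
                        .add(i);
            }
        }
    }

    // Hook injected just before each instrumented method returns.
    public static Object recordReturn(String method, Object result) {
        if (result == null) {
            nullReturns.add(method);
        }
        return result;
    }

    // After the test run: emit candidate @Nullable sites for later
    // sanitisation and propagation.
    public static void dump() {
        nullArgs.forEach((m, idxs) ->
                System.out.println("@Nullable param(s) " + idxs + " of " + m));
        nullReturns.forEach(m ->
                System.out.println("@Nullable return of " + m));
    }
}
```

In practice, a bytecode instrumentation agent would inject these hooks while the project's test suite runs, and the dumped candidates would then pass through the sanitisation and propagation stages mentioned above.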

BibTeX - Entry

@InProceedings{dietrich_et_al:LIPIcs.ECOOP.2023.10,
  author =	{Dietrich, Jens and Pearce, David J. and Chandramohan, Mahin},
  title =	{{On Leveraging Tests to Infer Nullable Annotations}},
  booktitle =	{37th European Conference on Object-Oriented Programming (ECOOP 2023)},
  pages =	{10:1--10:25},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-281-5},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{263},
  editor =	{Ali, Karim and Salvaneschi, Guido},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/opus/volltexte/2023/18203},
  URN =		{urn:nbn:de:0030-drops-182037},
  doi =		{10.4230/LIPIcs.ECOOP.2023.10},
  annote =	{Keywords: null analysis, null safety, testing, program analysis}
}

Keywords: null analysis, null safety, testing, program analysis
Collection: 37th European Conference on Object-Oriented Programming (ECOOP 2023)
Issue Date: 2023
Date of publication: 11.07.2023
Supplementary Material: Software (Source Code): https://github.com/jensdietrich/null-annotation-inference archived at: https://archive.softwareheritage.org/swh:1:dir:af57d8b58579b09bdab080493b944d0a325821ed

