Artifact Evaluation
Motivation
Empirical evidence is important for producing research with long-lasting impact. Yet the computational tools used to produce or validate that research are not given nearly as much attention as they should be. To reverse this tendency, some communities have started a process called Artifact Evaluation (AE), which rewards well-written tools that allow researchers to replicate the experiments presented in papers. The main purpose of the AE process is to improve the reproducibility of computational results. ECRTS 2016 was the first real-time systems conference to introduce an artifact evaluation process, which was continued in 2017. In 2018, the process will be repeated.
Authors of accepted papers with a computational component will be invited to submit their code and/or their data to an optional repeatability evaluation or artifact evaluation (AE) process. We seek to achieve the benefits of the AE process without disturbing the current process through which ECRTS has generated high-quality programs in the past.
Therefore, the current submission, review, and acceptance procedure is completely unaltered by the decision to run an AE process. Only once acceptance decisions are final will the authors of accepted papers be invited to submit an artifact evaluation (or replication) package. The optional repeatability evaluation process has no impact on whether a paper is accepted at ECRTS. Moreover, the titles and authors of papers that do not pass the repeatability evaluation will not be disclosed.
The authors of papers whose artifacts pass the evaluation can decide to use a seal indicating that the artifact has passed the repeatability test.
Artifacts should include two components: (a) a document explaining how to use the artifact and which of the experiments presented in the paper are repeatable (with reference to specific numbers, figures, and tables in the paper), the system requirements, and instructions for installing and using the artifact; (b) the software and any accompanying data. A good how-to on preparing an artifact evaluation package is available online at http://bit.ly/HOWTO-AEC.
We recognize that not all results are repeatable. In some cases, the execution time of the experiments is too long; in other cases, one would need a complete infrastructure to execute the tests. We encourage submissions, but we will try to repeat only results that are reasonably repeatable with regular computing resources. For these early editions of the process, we argue that the focus should be on (1) replicating the tests that are repeatable, and (2) improving the repeatability infrastructure so that more tests become repeatable in the future.
The submitted artifact is treated as confidential and will not be released. The hope is that, since the authors are already packaging the material for submission, they will be encouraged to release the artifact themselves as well. This is, however, not mandatory, and it is up to the authors to decide. The evaluation process is single-blind and non-competitive: the acceptance of the papers has already been decided, and the hope is that all submitted artifacts will pass the evaluation criteria.
Timeline
The submission deadline is April 7th. During the first three weeks, the evaluators will try to execute the code and will communicate with the authors, via the chairs, in case of problems executing the code. The evaluators will then have additional time to evaluate the reproducibility of the results. Responses to authors will be communicated on May 15th. The seal of approval will be added to the camera-ready version of the paper by the organizers.
Submission Process
To submit an artifact, use the EasyChair submission website:
https://easychair.org/conferences/?conf=ecrtsae18
and specify the following fields:
- Authors: same as for the (already accepted) paper,
- Title: same as for the (already accepted) paper,
- Abstract: same as for the (already accepted) paper,
- Keywords: use this field to specify the URLs at which we can find the information about the artifact and the artifact code (which should be hosted by the authors). You should specify at least three keywords; among these there must be a file containing instructions for the evaluators. If not all three lines are needed, fill three lines of keywords anyway to enable the submission form (see the example after this list). Examples are:
- instructions: URL of the instruction file (for example, a PDF or a plain-text file),
- code: link to the code (for example, a zip file or a repository),
- vm: link to a virtual machine image (if present),
- docker: link to a Docker file (if present),
- Paper: include the PDF version of your paper (the instruction file should specify which results in the paper are reproducible, so that the artifact evaluators can check them).
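For instance, the three keyword lines could look like the following (the URLs are placeholders for illustration only; replace them with the actual locations where you host your material):

instructions: https://www.example.org/ecrts18-ae/instructions.pdf
code: https://www.example.org/ecrts18-ae/artifact.zip
vm: https://www.example.org/ecrts18-ae/vm-image.ova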
For additional questions, contact the Artifact Evaluation Chair, Martina Maggio (martina.maggio@control.lth.se).