The ECRTS 2019 artifacts are published as issue 1 of volume 5 of the Dagstuhl Artifacts Series (DARTS) at:
http://drops.dagstuhl.de/darts/
Motivation
Empirical evidence is important for producing research with long-lasting impact. We feel that the tools and experiments used to produce or validate research results are not given as much attention as they deserve. To counteract this tendency, some communities have started a process called Artifact Evaluation (AE), which rewards well-written tools that allow researchers to replicate the experiments presented in papers. The main purpose of the AE process is to improve the reproducibility of computational results.
ECRTS was the first real-time systems conference to introduce artifact evaluation, in 2016; the process continued in 2017 and 2018, and will be repeated in 2019.
Authors of accepted papers with a computational component will be invited to submit their code and/or their data to an optional AE process. We seek to achieve the benefits of the AE process without disturbing the process through which ECRTS has produced high-quality programs in the past. In particular, the decision whether or not to submit an artifact has no impact on whether a paper is accepted at ECRTS. Moreover, the titles and authors of papers whose artifacts do not pass the repeatability evaluation will not be disclosed.
The authors of papers whose artifacts pass the evaluation can decide to use a seal indicating that the artifact has passed the repeatability test.
We recognize that not all results are repeatable. For instance, the experiments may take too long to execute, or they may require a complete infrastructure that is not available to the evaluators. We encourage all submissions, but we can only guarantee to repeat experiments that are reasonably repeatable with regular computing resources. Our focus is on (1) replicating the tests that are repeatable, and (2) improving the repeatability infrastructure so that more tests become repeatable in the future.
Formatting instructions
Artifacts should include two components:
- a document describing the system requirements, the instructions for installing and using the artifact, and which of the experiments presented in the paper are repeatable (with references to the specific numbers, figures, and tables in the paper);
- the software and any accompanying data.
A good how-to guide for preparing an artifact evaluation package is available online at http://bit.ly/HOWTO-AEC.
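As a rough sketch (this outline is only a suggestion, not a requirement of the evaluation process), the instructions document could be organized along these lines:
- paper title and authors;
- system requirements (operating system, memory, expected running time);
- installation instructions;
- usage: the commands needed to run each experiment;
- a mapping from each experiment to the corresponding numbers, figures, and tables in the paper.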
The evaluation process is single-blind and non-competitive; we hope that all submitted artifacts can pass the evaluation criteria.
Timeline
- Artifact submission deadline: April 11, 2019 (23:59 UTC-12)
- Response to authors: May 16, 2019
During the first three weeks, the evaluators will try to execute the code and, in case of problems, will communicate with the authors via the chairs. The evaluators will then have additional time to evaluate the reproducibility of the results. The seal of approval will be added to the camera-ready version of the paper by the organizers.
Submission process
To submit an artifact, use the EasyChair submission website:
https://easychair.org/conferences/?conf=ecrtsae19
and specify the following fields:
- Authors: same as for the (already accepted) paper,
- Title: same as for the (already accepted) paper,
- Abstract: same as for the (already accepted) paper,
- Keywords: use this field to specify the URLs at which we can find the information about the artifact and the artifact code (which should be hosted by the authors). You should specify at least three keywords, and among these there must be a file containing the instructions for the evaluators; if not all three lines are needed, fill in three lines of keywords anyway to enable the submission form. Each keyword line should have one of the following forms (see also the sample entries after this list):
- instructions: URL of the instructions file (for example, a PDF or plain-text file),
- code: link to the code (for example, a zip file or a repository),
- vm: link to a virtual machine image (if present),
- docker: link to a Dockerfile (if present),
- Paper: include the PDF version of your paper (the instructions file should specify which results in the paper are reproducible, so that the artifact evaluators can check them).
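For illustration, a hypothetical set of keyword lines might look as follows (the URLs are placeholders, not real locations; host the files wherever is convenient for you):
- instructions: https://example.org/ecrts19-artifact/README.pdf
- code: https://example.org/ecrts19-artifact/artifact.zip
- vm: https://example.org/ecrts19-artifact/artifact-vm.ova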
For additional questions, contact the Artifact Evaluation chairs.