Motivation
Empirical evidence is essential for research with long-lasting impact. We feel that the tools and experiments used to produce or validate research results do not receive as much attention as they deserve. To counteract this tendency, Artifact Evaluation (AE) rewards well-written tools that allow researchers to replicate the experiments presented in papers. The main purpose of the AE process is to improve the reproducibility of computational results.
ECRTS was the first real-time systems conference to introduce artifact evaluation, in 2016, and has offered it ever since.
Authors of accepted papers with a computational component will be invited to submit their code and/or data to an optional AE process. We seek to achieve the benefits of AE without disturbing the established process through which ECRTS has produced high-quality programs in the past. In particular, the decision whether or not to submit an artifact has no impact on whether a paper is accepted at ECRTS. Moreover, the title and authors of papers whose artifacts do not pass the repeatability evaluation will not be disclosed.
Authors of papers whose artifacts pass the evaluation may add a seal to their paper indicating that the artifact has passed the repeatability test, and the artifact will be published in the Dagstuhl Artifacts Series (DARTS).
We recognize that not all results are repeatable. For instance, the execution time of the experiments may be too long, or a complete infrastructure needed to run the tests may be unavailable to the evaluators. We encourage submissions, but we can only guarantee to repeat experiments that are reasonably repeatable with regular computing resources. Our focus is on: (1) replicating the tests that are repeatable; (2) improving the repeatability infrastructure so that more tests become repeatable in the future.
Formatting instructions
Artifacts should include two components:
- a document explaining how to use the artifact and which of the experiments presented in the paper are repeatable (with references to specific numbers, figures, and tables in the paper), along with the system requirements and instructions for installing and using the artifact;
- the software and any accompanying data.
A good guide on how to prepare an artifact evaluation package is available at http://bit.ly/HOWTO-AEC.
The evaluation process is single-blind. It is non-competitive, and we hope that all submitted artifacts will pass the evaluation criteria.
Special Artifacts
If you are not in a position to prepare the artifact as above, or if your artifact requires special libraries, commercial tools (e.g., MATLAB or specific toolboxes), or particular hardware, please contact the AE chair as soon as possible.
Recommendation: Use Virtual Machines
Based on previous experience, the biggest hurdle to successful reproducibility is the setup and installation of the necessary libraries and dependencies. Authors are therefore encouraged to prepare a virtual machine (VM) image containing their artifact (if possible) and to make it available via HTTP throughout the evaluation process (and, ideally, afterwards). As the basis of the VM image, please choose a commonly used OS version that has been tested with the virtual machine software and that evaluators are likely to be familiar with. We encourage authors to use VirtualBox (https://www.virtualbox.org) and to save the VM image as an Open Virtual Appliance (OVA) file. To facilitate the preparation of the VM, we suggest using the VM images available at https://www.osboxes.org/.
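As a rough illustration (not a required workflow), the Python sketch below shows one way to export a prepared VirtualBox VM to an OVA file via the VBoxManage command-line tool and to compute a checksum that evaluators can use to verify their download. The VM name, output path, and the availability of VBoxManage on the host are assumptions; adapt them to your setup.

    # Hypothetical packaging helper: exports a VirtualBox VM to an OVA file
    # and prints a SHA-256 checksum for the resulting archive.
    # Assumes VirtualBox (with its VBoxManage CLI) is installed and that a
    # VM named "ecrts24-artifact" already exists; adjust names and paths.
    import hashlib
    import subprocess

    VM_NAME = "ecrts24-artifact"       # assumed name of the prepared VM
    OVA_PATH = "ecrts24-artifact.ova"  # file to host via HTTP for evaluators

    # Export the VM as an Open Virtual Appliance (OVA).
    subprocess.run(
        ["VBoxManage", "export", VM_NAME, "--output", OVA_PATH],
        check=True,
    )

    # Compute a SHA-256 checksum so evaluators can verify the download.
    sha256 = hashlib.sha256()
    with open(OVA_PATH, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            sha256.update(chunk)
    print(f"{sha256.hexdigest()}  {OVA_PATH}")

Publishing the checksum alongside the download link makes it easy to detect truncated or corrupted transfers before the evaluation starts.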
Timeline
- Abstract Submission (Platform Dependencies): Monday, April 29, 2024
- Artifact Evaluation Submission Deadline: Monday, May 6, 2024
- Author Notification: Thursday, May 23, 2024
- Submission to DARTS (final version for artifacts that passed the evaluation): tba
Submission Process
If your paper has been accepted, we highly encourage you to submit to the ECRTS’24 Artifact Evaluation (AE). The artifact submission deadline is Monday, May 6, 2024, with an abstract due Monday, April 29, 2024.
Authors of artifacts that pass the evaluation will be asked to submit the final artifact version to Dagstuhl Artifacts Series (DARTS). The deadline is tba.
Submission Site: https://easychair.org/conferences/?conf=ecrtsae24
Organizers
Artifact Evaluation co-chairs:
- Matthias Becker, KTH Royal Institute of Technology, Sweden
- Catherine Nemitz, Davidson College, North Carolina, USA
Evaluation committee:
- Tanya Amert, Carleton College, United States of America
- Jatin Arora, CISTER, ISEP, Polytechnic Institute of Porto, Portugal
- Joshua Bakita, University of North Carolina at Chapel Hill, United States of America
- Daniel Casini, Scuola Superiore Sant’Anna – Pisa, Italy
- Kuan-Hsun Chen, University of Twente, Netherlands
- Xiaotian Dai, University of York, United Kingdom
- Zheng Dong, Wayne State University, United States of America
- Brian Donyanavard, San Diego State University, United States of America
- Anna Friebe, Mälardalen University, Sweden
- Mario Günzel, TU Dortmund University, Germany
- Seonyeong Heo, Kyung Hee University, South Korea
- Sims Osborne, Elon University, United States of America
- Luigi Pannocchi, Scuola Superiore Sant’Anna – Pisa, Italy
- Romaric Pegdwende Nikiema, University of Rennes/INRIA, France
- Marion Sudvarg, Washington University in St. Louis, United States of America
- Corey Tessler, University of Nevada Las Vegas, United States of America