{"id":32,"date":"2021-11-05T16:04:16","date_gmt":"2021-11-05T16:04:16","guid":{"rendered":"http:\/\/v2202107152796158410.ultrasrv.de\/?page_id=32"},"modified":"2024-04-21T08:37:58","modified_gmt":"2024-04-21T08:37:58","slug":"artifact-evaluation","status":"publish","type":"page","link":"https:\/\/www.ecrts.org\/artifact-evaluation\/","title":{"rendered":"Artifact evaluation"},"content":{"rendered":"
Motivation

Empirical evidence is important for producing research with long-lasting impact. We feel that the tools and experiments used to produce or validate research results do not receive as much attention as they deserve. To counteract this tendency, Artifact Evaluation (AE) rewards well-written tools that allow researchers to replicate the experiments presented in papers. The main purpose of the AE process is to improve the reproducibility of computational results.

In 2016, ECRTS became the first real-time systems conference to introduce artifact evaluation, and it has continued the process ever since.

Authors of accepted papers with a computational component will be invited to submit their code and/or data to an optional AE process. We seek to achieve the benefits of artifact evaluation without disturbing the review process through which ECRTS has produced high-quality programs in the past. In particular, the decision whether or not to submit an artifact has no impact on whether a paper is accepted at ECRTS. Moreover, the titles and authors of papers whose artifacts do not pass the repeatability evaluation will not be disclosed.

Authors of papers whose artifacts pass the evaluation may choose to display a seal indicating that the artifact has passed the repeatability test, and the artifact will be published in the Dagstuhl Artifacts Series (DARTS).

We recognize that not all results are repeatable. For instance, the execution time of the experiments may be too long, or the tests may require a complete infrastructure that is not available to the evaluators. We encourage submissions, but we can only guarantee to repeat experiments that are reasonably repeatable with regular computing resources. Our focus is on (1) replicating the tests that are repeatable and (2) improving the repeatability infrastructure so that more tests become repeatable in the future.

Formatting instructions

Artifacts should include two components: