Work-in-Progress: Evaluation Framework for Self-Suspending Schedulability Tests

Abstract

Numerical simulations often play an important role in evaluating and comparing the performance of schedulability tests, as they make it possible to empirically demonstrate the applicability of these tests on synthesized task sets under various configurations. To enable a fair comparison of different schedulability tests, von der Brüggen et al. presented the first version of an evaluation framework for self-suspending task sets. In this work-in-progress, we further enhance the framework with features that ease its use, e.g., Python 3 support, an improved GUI, multiprocessing, Gurobi optimization, and external task evaluation. In addition, we integrate the state-of-the-art approaches we are aware of into the framework. Moreover, the documentation has been significantly improved to simplify its application in further research and development. To the best of our knowledge, the framework contains all suspension-aware schedulability tests for uniprocessor systems, and we aim to keep it up to date.
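
To illustrate the kind of experiment such a framework supports, the sketch below synthesizes self-suspending task sets and measures the acceptance ratio of a simple suspension-oblivious EDF test across utilization levels, with the trials distributed over worker processes. The helper names (uunifast, generate_task_set, suspension_oblivious_edf, run_trial) and parameter choices are illustrative assumptions for this sketch and do not reflect the framework's actual API.

```python
# Minimal sketch of a schedulability evaluation experiment (illustrative only;
# the task generation and the test below are simplified placeholders, not the
# framework's actual API).
import random
from multiprocessing import Pool


def uunifast(n, total_util):
    """Generate n task utilizations summing to total_util (UUniFast)."""
    utils, remaining = [], total_util
    for i in range(n - 1):
        nxt = remaining * random.random() ** (1.0 / (n - i - 1))
        utils.append(remaining - nxt)
        remaining = nxt
    utils.append(remaining)
    return utils


def generate_task_set(n, total_util, susp_ratio=0.2):
    """Create periodic tasks with execution time C, suspension time S, period T."""
    tasks = []
    for u in uunifast(n, total_util):
        period = random.uniform(10, 1000)
        wcet = u * period
        tasks.append({"C": wcet, "S": susp_ratio * wcet, "T": period})
    return tasks


def suspension_oblivious_edf(tasks):
    """Suspension-oblivious EDF test: treat suspension as additional execution."""
    return sum((t["C"] + t["S"]) / t["T"] for t in tasks) <= 1.0


def run_trial(total_util, n=10, trials=100):
    """Return the acceptance ratio of the test at one utilization level."""
    accepted = sum(
        suspension_oblivious_edf(generate_task_set(n, total_util))
        for _ in range(trials)
    )
    return total_util, accepted / trials


if __name__ == "__main__":
    utilizations = [u / 10 for u in range(1, 11)]  # 0.1 .. 1.0
    with Pool() as pool:  # evaluate utilization levels in parallel
        for util, ratio in pool.map(run_trial, utilizations):
            print(f"U = {util:.1f}: acceptance ratio = {ratio:.2f}")
```

In a full experiment, the placeholder test would be replaced by the suspension-aware analyses under comparison, and the acceptance ratios plotted against total utilization for each test.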

Publication
IEEE Real-Time Systems Symposium (RTSS)