## First Interdisciplinary Workshop on Algorithmic Challenges in Real-Time Systems

### Technische Universität Berlin, February 27-29, 2012

This workshop aims to bring together young researchers from the scheduling communities in computer science, mathematics, and operations research to discuss common research interests. The focus is broadly on theory, algorithms, and complexity for real-time systems, which appears to be a fruitful field for interdisciplinary collaboration on important problems. The workshop program contains talks, open problem sessions, and discussion rounds on the general future development of this area. The goal is to discuss relevant research directions, identify particular research questions and potential collaborations, and establish a long-lasting research network.

This workshop is supported by the Mathematics Department at TU Berlin, and the DFG Research Center MATHEON.

Location: The workshop will take place in the BMS conference room on the second floor of the Mathematics Department at TU Berlin. Please see this page for travel information.

Organizing committee: Marko Bertogna (University of Modena, Italy), Nicole Megow (MPI für Informatik, Saarbrücken, and TU Darmstadt), Sebastian Stiller (TU Berlin).

Registration: Please send an e-mail to Sebastian. There is a registration fee of 40 Euro.

#### Confirmed speakers

Nikhil Bansal (TU Eindhoven)
Enrico Bini (Scuola Sant'Anna, Pisa)
Vincenzo Bonifaci (IASI-CNR Rome)
Jian-Jia Chen (KIT Karlsruhe)
Liliana Cucu-Grosjean (INRIA Nancy)
Rob Davis (University of York)
Monaldo Mastrolilli (IDSIA Manno-Lugano)
Ahlem Mifdaoui (ISAE Toulouse)
Thomas Nolte (Mälardalen University, Västerås)
Cyriel Rutten (Maastricht University)
Rene Sitters (Vrije Universiteit, Amsterdam)
Suzanne van der Ster (Vrije Universiteit, Amsterdam)
Andreas Wiese (La Sapienza, Rome)

#### Program

Monday (Feb. 27)

12:00-13:00 Welcome buffet
13:00-16:00 Introductory round and presentation of general research interests
16:00-16:30 Coffee
16:30-17:00 Marko Bertogna: Future funding opportunities, energy scheduling and multiprocessor systems [slides]
17:00-17:30 Vincenzo Bonifaci: Scheduling a Few Different Types of Unrelated Machines [slides]
17:30-18:00 Ahlem Mifdaoui: System level worst-case performance evaluation of embedded Networks [slides]

Tuesday (Feb. 28)

09:30-10:00 Jian-Jia Chen: Timing predictability in resource-sharing systems [slides]
10:00-10:30 Suzanne van der Ster: Mixed-criticality scheduling of implicit-deadline task systems on a single machine [slides]
10:30-11:00 Rob Davis: Optimal Priority Assignment Algorithms for Probabilistic Real-Time Systems [slides1, slides2]
11:00-11:30 Coffee
11:30-12:30 Collaboration and discussion
12:30-13:30 Lunch
13:30-15:30 Collaboration and discussion
15:30-16:00 Cyriel Rutten: Feasibility tests for sporadic task systems on unrelated parallel machines [slides]
16:00-16:30 Enrico Bini: Optimal assignment of periods and deadlines to EDF-scheduled tasks [slides]
16:30-17:00 Nikhil Bansal: The Geometry of Scheduling [slides]
17:00-18:00+ Discussion on future research topics

Wednesday (Feb. 29)

09:30-10:00 Rene Sitters: Efficient algorithms for average completion time scheduling [slides]
10:00-10:30 Thomas Nolte: Predictability and resource reservation [slides]
10:30-11:00 Coffee
11:00-11:30 Andreas Wiese: The periodic maintenance problem [slides]
11:30-12:00 Nicole Megow: Scheduling to meet deadlines: online algorithms and feasibility tests [slides]
12:00 Review and closing

#### Abstracts

Marko Bertogna: Future funding opportunities, Energy scheduling and Multiprocessor systems

The Seventh Framework Programme for Research and Innovation of the European Community is coming to a conclusion, and will be replaced in 2014 by Horizon 2020, the new research funding programme that will allocate around 80 billion Euro through three broad lines of action. This talk will briefly present the opportunities for joint project proposals in the remaining calls of FP7, and the more promising research directions of Horizon 2020, with a particular focus on ICT topics that are of interest to our communities (real-time systems, embedded systems, algorithms, scheduling). The intention is to stimulate discussion towards establishing meaningful collaborations and connections for future joint project proposals. Two very generic examples of possible research directions are given: the management of real-time electrical loads, and the scheduling problem for massively parallel multiprocessor platforms.

Vincenzo Bonifaci: Scheduling a Few Different Types of Unrelated Machines

We give a polynomial time approximation scheme (PTAS) for the problem of scheduling a set of jobs on a set of unrelated machines in the case that each machine belongs to one of a fixed number of types, and the processing time of each job depends only on the job and the type of the machine it is assigned to. This generalizes existing PTASs for identical machines and for a constant number of unrelated machines. The result more generally holds for multidimensional jobs with a fixed number of dimensions. One application is the assignment of sporadic implicit-deadline tasks to a set of heterogeneous processors in the presence of, e.g., memory constraints.

Ahlem Mifdaoui: System level worst-case performance evaluation of embedded Networks

With the increasing complexity of embedded networks and the growing quantity of exchanged data, making the right design decisions to guarantee the system's requirements becomes a difficult task for the designer. Hence, analyzing the system's characteristics from the first steps of the development cycle has become one of the major challenges in the design process. Simulations are commonly used for exploring the design space and validating the functional and timing behavior of embedded networks. However, such approaches are feasible only for small parts of the system and cannot cover the entire domain of model applicability, in particular the rare events that represent worst-case behavior. Clearly, then, simulations cannot provide the deterministic guarantees required by critical embedded networks with hard certification requirements, as in civil and military avionics, automotive systems, and satellites, where a failure might have disastrous consequences.

With a formal specification language such as StateCharts or the Specification and Description Language (SDL), it is possible to verify the functional behavior of the network. However, for large networks with many nodes, this approach suffers from the state-space explosion problem inherent to the reachability analysis techniques implemented by formal verification tools.

To overcome these limitations, it becomes necessary to integrate a decision-aid tool from the first steps of the design process, in order to choose the right system parameters before investing too much time in detailed implementations. WOPANets (Worst Case Performance Analysis of embedded Networks) is a new tool that embodies an analytical approach based on the Network Calculus formalism to perform system-level worst-case performance evaluation, and it includes optimization analysis to find the optimal design solution that guarantees the system's requirements. This approach should reduce design time and cost. Some avionics and space networks, such as AFDX and SpaceWire, are already implemented in WOPANets, and new models for applications based on multi-cluster embedded networks and wireless technologies are in progress.

Jian-Jia Chen: Timing predictability in resource-sharing systems

Designing systems with predictable timing behavior is an important task for safety-critical systems, such as automotive and avionic applications. This requires joint considerations from both software and hardware sides to certify the timing properties. The hardware should be timing predictable, i.e., timing accidents, including cache misses, pipeline stalls, branch mispredictions, bus collisions, memory refresh of DRAM, and TLB misses, should have bounded delay. The software should also be timing predictable from a single task's perspective (e.g., bounded loops, program depth, stack size, etc.) to multiple tasks' perspective (e.g., bounded number of preemptions, restarts, resource contentions, etc.).

Typically, when a predictable hardware platform is given, the timing behavior of a program can be analyzed approximately by using static analysis. The worst-case execution time (WCET) is then used for analyzing whether a set of programs can meet their timing constraints, i.e., whether each program's response time stays within its relative deadline. This works well for uniprocessor systems or multiprocessor systems without hardware sharing.

However, to reduce cost and power consumption, resource sharing has been widely adopted in modern multicore system designs: several cores share memory, a communication fabric, etc. The main drawback of resource sharing is the performance penalty, so how the resources are shared significantly affects the timing behavior. Timing anomalies usually arise when resource sharing is considered, and constructing tight analyses is usually non-trivial.

I would like to talk a little bit about what we have done for worst-case timing analysis when resource sharing is considered. We have tried to model the fundamental problem and solved it with an over-approximation of high time/space complexity. Tighter and more efficient analyses are needed to make sure that resource sharing in multicore systems does not make the system's timing unpredictable.

Suzanne van der Ster: Mixed-criticality scheduling of implicit-deadline task systems on a single machine

We consider scheduling an implicit-deadline sporadic task system on a single machine in a mixed-criticality setting. Each task of such a system generates a (possibly infinite) sequence of jobs, released one by one, with an interarrival separation bounded from below by a task-dependent period. Each job has a relative deadline equal to the length of that period.

In a mixed-criticality setting, each task has multiple levels of worst-case execution time estimates, and each task has its own criticality level. By executing the tasks, we learn which level the system is in (this may change over time). When the system is in level $\ell$, all jobs of tasks with criticality level at least $\ell$ should be scheduled for their level-$\ell$ execution requirement, so as to meet their deadlines.

Mixed-criticality systems arise when multiple functionalities are scheduled upon a single shared computing platform. This can force tasks of different importance (i.e. criticality) to share a processor.

We present an algorithm for scheduling mixed-criticality task systems, called EDF-VD (Earliest Deadline First with Virtual Deadlines). We give sufficient conditions to check schedulability efficiently for $K$ levels. For 2 levels we also analyze the effectiveness of our algorithm using the processor speedup metric: we show that if a 2-level task system is schedulable on a unit-speed processor, then it is correctly scheduled by EDF-VD on a processor of speed 4/3.
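
As a concrete illustration, the two-level sufficient test behind EDF-VD can be sketched as follows. The scaling factor $x$, the per-level utilization notation, and the particular conditions below follow the standard presentation of EDF-VD and are stated here as an assumption, not taken verbatim from the talk.

```python
from fractions import Fraction

def edf_vd_test(tasks):
    """Sufficient two-level EDF-VD test (a sketch, assuming the standard
    conditions).  Each task is (criticality, C_LO, C_HI, T); utilizations
    are summed per level.  Returns the virtual-deadline scaling factor x
    on success, or None if the test is inconclusive."""
    u_lo_lo = sum(Fraction(c_lo, t) for lev, c_lo, c_hi, t in tasks if lev == 'LO')
    u_hi_lo = sum(Fraction(c_lo, t) for lev, c_lo, c_hi, t in tasks if lev == 'HI')
    u_hi_hi = sum(Fraction(c_hi, t) for lev, c_lo, c_hi, t in tasks if lev == 'HI')
    if u_lo_lo >= 1 or u_lo_lo + u_hi_lo > 1:
        return None                      # not even LO-mode feasible under EDF
    x = u_hi_lo / (1 - u_lo_lo)          # smallest x keeping LO mode schedulable
    if x * u_lo_lo + u_hi_hi <= 1:       # HI-mode condition
        return x
    return None

# Hypothetical task set: (criticality, C_LO, C_HI, period)
tasks = [('LO', 3, 3, 10), ('HI', 2, 5, 10)]
print(edf_vd_test(tasks))   # Fraction(2, 7): HI tasks get virtual deadline x*T
```

In LO mode, HI-criticality tasks are run with the shortened deadline $x \cdot T$; this slack is what allows their level-HI budgets to be met after a mode switch.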

Rob Davis: Optimal Priority Assignment Algorithms for Probabilistic Real-Time Systems

We consider the problem of optimal priority assignment in fixed-priority pre-emptive single-processor systems where tasks have probabilistic execution times. We identify three sub-problems which optimize different metrics related to the probability of deadline failures.

Cyriel Rutten: Feasibility tests for sporadic task systems on unrelated parallel machines

In modern hardware architectures it has become a very common feature to contain several types of processors with possibly completely different characteristics. In (real-time) scheduling, this feature is modeled by assuming the machines of a system to be unrelated. We study the problem of assigning sporadic tasks to unrelated machines such that the tasks on each machine can be feasibly scheduled.

We develop a polynomial-time algorithm which approximates the problem with a constant speedup factor of $2\cdot(4 + \sqrt{6}) \approx 12.9$ when the number of machines is arbitrary. Further, we show that any polynomial-time algorithm needs a speedup factor of at least 2, unless P = NP. Also, if the number of machines is constant, we approximate the problem to a factor of $1+\epsilon$.

In this talk, we will provide an overview of the techniques used to derive our results. Key is a better understanding of the demand bound function, which yields a necessary and sufficient condition for a task system on a single machine to be feasible. In particular, we evaluate the function only at a constant number of time points. Additionally, we approximate the demand bound function of a task by the task's utilization whenever the task's deadline is sufficiently smaller than the time point at which the function is evaluated. When the number of machines is arbitrary, we formulate a linear program from which an integer solution (implying a schedule) is derived using a result of Karp et al. (1987). When the number of machines is constant, we instead formulate a dynamic program.
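
The demand bound function referred to here is the standard one for a sporadic task with execution time $C$, relative deadline $D$, and period $T$. A minimal sketch, together with the kind of linear utilization-based upper bound alluded to above (the exact cutoff used in the talk is not reproduced here):

```python
def dbf(C, D, T, t):
    """Demand bound function of a sporadic task (C, D, T): the maximum
    total execution demand of jobs with both release and deadline in [0, t]."""
    if t < D:
        return 0
    return ((t - D) // T + 1) * C

def dbf_linear_bound(C, D, T, t):
    """Linear upper bound on dbf based on the utilization C/T; it is
    tight exactly at the points t = D + k*T (illustrative, see above)."""
    if t < D:
        return 0
    return C * (t - D) / T + C

# Task with C = 1, D = 2, T = 3: one unit of demand every 3 time units.
print([dbf(1, 2, 3, t) for t in (1, 2, 5, 8)])   # [0, 1, 2, 3]
```

A task system is feasible on a single speed-1 machine iff the summed dbf of all tasks stays below $t$ for every $t$; evaluating only a constant number of points and switching to the linear bound for large $t$ is what makes the test efficient.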

Enrico Bini: Optimal assignment of periods and deadlines to EDF-scheduled tasks

In many application domains (such as control systems), periods and deadlines are not known parameters. They are rather variables that need to be assigned optimally by the designer or (better) by a solver. For this purpose, suitable representations of the feasibility condition are needed. In the talk, the problem and some preliminary results are presented.

Nikhil Bansal: The Geometry of Scheduling

We consider the following general scheduling problem: there are $n$ jobs, each with an arbitrary release time, size, and a monotone function specifying the cost incurred when the job is completed at a particular time. The objective is to find a preemptive schedule of minimum aggregate cost. This formulation is general enough to include many natural scheduling objectives in both the traditional and the real-time scheduling setting.

Our main result is a randomized polynomial-time algorithm with approximation ratio $O(\log \log P)$, which substantially improves the previous results even for very special cases. The key insight is to relate this problem to certain natural geometric problems in two and three dimensions, and to use recent techniques developed for geometric set cover problems with low union complexity.

Thomas Nolte: Predictability and resource reservation

Resource reservation techniques provide a way to guarantee a supply of resources from the resource provider to the user. Over the years several approaches have been developed, ranging from purely theoretical models (e.g., linear approximation) to models closer to the implementation (e.g., the periodic resource model) and real implementations (e.g., hierarchical scheduling). However, these resource reservation techniques typically cater only for a particular kind of resource, such as the CPU, on a particular kind of computer architecture, e.g., a single CPU. We would like to take a broader view of resource reservation. Multiple resources could be considered together, to deal with effects such as a memory access affecting the time needed to execute a particular piece of software. Deciding where and when a particular piece of software executes will impact performance, and hence also the price to be paid for achieving predictability. For resource reservation techniques to be practical, more care needs to be taken in the way, e.g., hardware is catered for. Here we believe in combining traditional resource reservation techniques with more advanced modeling and analysis techniques, including statistics and probabilities.

Rene Sitters: Efficient algorithms for average completion time scheduling

We analyze the competitive ratio of online algorithms for minimizing (weighted) average completion time on identical parallel machines and prove that the well-known shortest remaining processing time algorithm (SRPT) is $5/4$-competitive w.r.t. the average completion time objective. The SRPT algorithm has a natural generalization to the case where jobs have given weights; unfortunately, our proof does not carry over to this case. For weighted completion times, no algorithm is known to have a competitive ratio less than 2. Remarkably, even for the offline problem, the only ratio less than 2 results from the approximation scheme given by Afrati et al. We give a deterministic online algorithm with competitive ratio $1.791+o(1)$. This ratio holds for preemptive and non-preemptive scheduling. The two algorithms show that approximation ratios less than 2 can be obtained for parallel machines by simple and efficient online algorithms. The known lower bounds indicate that competitive ratios close to 1 are possible for randomized algorithms, especially when preemption is allowed.
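
To make the SRPT rule concrete, here is a small unit-time simulation on $m$ identical machines. Integer release times and sizes are assumed for simplicity; this is an illustration of the rule itself, not of the competitive analysis from the talk.

```python
def srpt(jobs, m):
    """Unit-time SRPT simulation on m identical machines.
    jobs: list of (release_time, size) with integer values.
    Returns the list of completion times, in input order."""
    remaining = [size for _, size in jobs]
    completion = [0] * len(jobs)
    t = 0
    while any(remaining):
        ready = [i for i, (rel, _) in enumerate(jobs)
                 if rel <= t and remaining[i] > 0]
        # Preemption is free: the m released jobs with the shortest
        # remaining processing time run during [t, t+1).
        for i in sorted(ready, key=lambda i: remaining[i])[:m]:
            remaining[i] -= 1
            if remaining[i] == 0:
                completion[i] = t + 1
        t += 1
    return completion

# Three jobs (release, size) on one machine: the short job released at
# time 2 preempts the long one.
print(srpt([(0, 3), (0, 1), (2, 1)], m=1))   # [5, 1, 3], average 3
```

The $5/4$ bound says that the average of these completion times is never more than 25% above that of an optimal preemptive schedule.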

Andreas Wiese: The periodic maintenance problem

The periodic maintenance problem is a real-time scheduling problem which arises in the avionics industry. A set of tasks has to be distributed on a minimum number of machines and offsets of the tasks have to be computed. The tasks emit jobs strictly periodically starting at their offset and then need to be executed on the machines without any delay. We present a thorough theoretical characterization of this problem in terms of approximation algorithms and complexity results. One important special case that we study is the case of harmonic period lengths, which very often arises in real-world instances. Then we present computational results for different IP-formulations of the problem. In particular, we present a formulation especially tailored to the harmonic case which clearly outperforms standard textbook approaches to model the problem.
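
For two strictly periodic tasks, whether chosen offsets avoid any overlap can be checked arithmetically. The sketch below uses a well-known gcd-based non-collision condition (stated here as an assumption, not taken from the talk) and cross-checks it by brute force over the hyperperiod; it requires Python 3.9+ for `math.lcm`.

```python
from math import gcd, lcm

def conflict_free(o1, c1, p1, o2, c2, p2):
    """Two tasks (offset o, execution time c, period p) emit jobs at
    o, o+p, o+2p, ... occupying [start, start+c) without delay.
    Assumed gcd condition: the tasks never overlap iff
    c1 <= (o2 - o1) mod g <= g - c2, where g = gcd(p1, p2)."""
    g = gcd(p1, p2)
    return c1 <= (o2 - o1) % g <= g - c2

def conflict_free_brute(o1, c1, p1, o2, c2, p2):
    """Cross-check by enumerating busy time units over one hyperperiod."""
    horizon = lcm(p1, p2) + max(o1, o2)
    busy1 = {o1 + k * p1 + i for k in range(horizon // p1 + 1) for i in range(c1)}
    busy2 = {o2 + k * p2 + i for k in range(horizon // p2 + 1) for i in range(c2)}
    return not (busy1 & busy2)

print(conflict_free(0, 1, 4, 1, 1, 6))   # True: offsets 0 and 1 never clash
print(conflict_free(0, 1, 4, 2, 1, 6))   # False: both tasks busy at time 8
```

Note how the test depends only on gcd(4, 6) = 2: with unit jobs, the two tasks fit iff their offsets differ modulo 2, which is the arithmetic structure that the harmonic-period case exploits.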

Nicole Megow: Scheduling to meet deadlines: online algorithms and feasibility tests

In this talk I will present recent results and open questions in the context of scheduling real-time jobs with hard deadlines on $m$ identical parallel machines. Each job has a processing time and a deadline, and the objective is to schedule jobs so that they complete before their deadlines. It is known that, even when the instance is feasible, it may not be possible to meet all deadlines when jobs arrive online over time. Therefore, we consider settings in which the online algorithm has additional resources, such as higher speed or more machines, than the optimal offline algorithm that knows all jobs in advance. We give a new online algorithm improving on previous speedup results, and we show how our new insights relate to demand bound functions for periodic task systems and lead to improved feasibility tests.