Bibliography

Grau, Andreas: Runtime Minimization of Scalable Network Emulation. University of Stuttgart, Faculty of Computer Science, Electrical Engineering, and Information Technology, Doctoral Thesis (2012). 169 pages, English.
Network emulation is a proven methodology for evaluating the performance of distributed applications and communication protocols. It models computer networks by connecting instances of the Software under Test (SuT), representing routers and hosts, through a distributed emulation tool. The emulation tool allows the parameters of these connections, such as bandwidth, delay, and loss rate, to be specified. Network emulation thus combines the benefits of network simulation, such as controllability and repeatability of experiments, with the benefits of real-world testbeds, namely the accuracy and realism of running unmodified implementations of the SuT.

Recently, researchers have spent considerable effort on increasing the scalability of network emulation to enable the evaluation of distributed systems in large network topologies with thousands of network nodes. Two main concepts have been introduced to reach this goal: node virtualization and time virtualization. Node virtualization partitions the physical nodes of an emulation testbed so that multiple instances of the SuT run on each physical node. However, the available resources of the physical nodes limit the number of virtual nodes they can execute without overloading the hardware. This overload can be avoided by applying time virtualization: executing the network experiment with a virtual time that runs slower than real time by a given factor effectively increases the testbed's resources, such as CPU and network capacity, by the same factor. However, the runtime of the experiment is also increased by that factor.

The goal of this work is to reduce the runtime of network experiments and thereby increase the satisfaction of testbed users and operators. To this end, this thesis makes the following contributions.
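The trade-off behind time virtualization can be sketched numerically. The following Python snippet is a minimal illustration, not the thesis' actual model: the function names and the simple capacity ratio are assumptions made here for clarity.

```python
import math

def slowdown_factor(required_capacity, real_capacity):
    """Smallest integer time-dilation factor that avoids overload.

    Slowing virtual time by this factor effectively multiplies the
    testbed's CPU and network capacity by the same factor -- at the
    cost of multiplying the experiment runtime by it as well.
    (Illustrative model only.)
    """
    return max(1, math.ceil(required_capacity / real_capacity))

def wall_clock_runtime(virtual_duration, factor):
    """Real-time duration of an experiment run under time dilation."""
    return virtual_duration * factor
```

For example, a scenario requiring 35 capacity units on a testbed offering 10 units needs a factor of 4, so a 60-second scenario occupies the testbed for 240 seconds of real time.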
First, we present an efficient emulation architecture for testbeds with multi-core processors that provides node and time virtualization while minimizing CPU and memory consumption as well as communication overhead. Second, we introduce the new concept of adaptive virtual time, which dynamically adjusts the speed of the experiment to its resource requirements at runtime; the experiment can thus run at increased speed during periods of low resource demand. Third, we provide an accurate testbed model that captures the resource requirements of an experiment. Fourth, based on this model, we introduce an approach that calculates an initial placement of virtual nodes onto physical nodes from the experiment specification so as to minimize the experiment runtime. Finally, to react to varying resource requirements during the experiment, we provide an approach that adapts the placement of the running experiment through transparent migration of virtual nodes, further reducing the runtime. The developed concepts are implemented and integrated into our Network Emulation Testbed (NET). Detailed evaluations of our prototype show the efficiency and effectiveness of our concepts in minimizing the runtime of experiments based on network emulation.
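The idea of choosing a placement that minimizes runtime can be illustrated with a simple greedy heuristic: assign virtual nodes, heaviest first, to the least-loaded physical host, since the most loaded host dictates the time-dilation factor and hence the runtime. This sketch is a plain load-balancing example under assumptions made here; the thesis uses a far more detailed testbed model and placement algorithm.

```python
def place_virtual_nodes(demands, num_hosts, host_capacity):
    """Greedy first-fit-decreasing placement of virtual nodes.

    demands: mapping of virtual-node name -> resource demand (e.g. CPU share)
    Returns (placement, factor) where placement maps each virtual node to a
    host index, and factor is the time-dilation factor needed so that no
    host exceeds host_capacity. (Illustrative heuristic only.)
    """
    loads = [0.0] * num_hosts
    placement = {}
    # Place the most demanding virtual nodes first ...
    for node, demand in sorted(demands.items(), key=lambda kv: -kv[1]):
        # ... each onto the currently least-loaded physical host.
        host = min(range(num_hosts), key=loads.__getitem__)
        placement[node] = host
        loads[host] += demand
    # The busiest host determines how much the experiment must be slowed.
    factor = max(1.0, max(loads) / host_capacity)
    return placement, factor
```

A balanced placement keeps the maximum host load, and with it the dilation factor, as small as possible; migrating virtual nodes at runtime, as the thesis proposes, re-balances the loads when demands change.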