
Refereed Conference/Workshop Papers

[1] Nikolaus Huber and Max Kramer. Design Trade-Offs in IBM Storage Virtualization. In Modeling and Simulating Software Architectures - The Palladio Approach, Ralf H. Reussner, Steffen Becker, Jens Happe, Robert Heinrich, Anne Koziolek, Heiko Koziolek, Max Kramer, and Klaus Krogmann, editors, chapter 13, pages 301-315. MIT Press, Cambridge, MA, October 2016. [ bib | http ]
[2] Fabian Brosig, Fabian Gorsler, Nikolaus Huber, and Samuel Kounev. Evaluating Approaches for Performance Prediction in Virtualized Environments (Short Paper). In Proceedings of the IEEE 21st International Symposium on Modeling, Analysis and Simulation of Computer and Telecommunication Systems (MASCOTS 2013), San Francisco, USA, August 14-16, 2013. [ bib | .pdf ]
[3] Nikolas Roman Herbst, Nikolaus Huber, Samuel Kounev, and Erich Amrehn. Self-Adaptive Workload Classification and Forecasting for Proactive Resource Provisioning. In Proceedings of the 4th ACM/SPEC International Conference on Performance Engineering (ICPE 2013), Prague, Czech Republic, April 21-24, 2013, pages 187-198. ACM, New York, NY, USA. April 2013. [ bib | DOI | slides | http | .pdf | Abstract ]
As modern enterprise software systems become increasingly dynamic, workload forecasting techniques are gaining in importance as a foundation for online capacity planning and resource management. Time series analysis covers a broad spectrum of methods to calculate workload forecasts based on history monitoring data. Related work in the field of workload forecasting mostly concentrates on evaluating specific methods and their individual optimisation potential or on predicting Quality-of-Service (QoS) metrics directly. As a basis, we present a survey on established forecasting methods of the time series analysis concerning their benefits and drawbacks and group them according to their computational overheads. In this paper, we propose a novel self-adaptive approach that selects suitable forecasting methods for a given context based on a decision tree and direct feedback cycles together with a corresponding implementation. The user needs to provide only his general forecasting objectives. In several experiments and case studies based on real world workload traces, we show that our implementation of the approach provides continuous and reliable forecast results at run-time. The results of this extensive evaluation show that the relative error of the individual forecast points is significantly reduced compared to statically applied forecasting methods, e.g. in an exemplary scenario on average by 37%. In a case study, between 55% and 75% of the violations of a given service level agreement can be prevented by applying proactive resource provisioning based on the forecast results of our implementation.
[4] Nikolaus Huber, André van Hoorn, Anne Koziolek, Fabian Brosig, and Samuel Kounev. S/T/A: Meta-Modeling Run-Time Adaptation in Component-Based System Architectures. In Proceedings of the 9th IEEE International Conference on e-Business Engineering (ICEBE 2012), Hangzhou, China, September 9-11, 2012, pages 70-77. IEEE Computer Society, Los Alamitos, CA, USA. September 2012, Acceptance Rate (Full Paper): 19.7% (26/132). [ bib | DOI | http | .pdf | Abstract ]
Modern virtualized system environments usually host diverse applications of different parties and aim at utilizing resources efficiently while ensuring that quality-of-service requirements are continuously satisfied. In such scenarios, complex adaptations to changes in the system environment are still largely performed manually by humans. Over the past decade, autonomic self-adaptation techniques aiming to minimize human intervention have become increasingly popular. However, given that adaptation processes are usually highly system specific, it is a challenge to abstract from system details enabling the reuse of adaptation strategies. In this paper, we propose a novel modeling language (meta-model) providing means to describe system adaptation processes at the system architecture level in a generic, human-understandable and reusable way. We apply our approach to three different realistic contexts (dynamic resource allocation, software architecture optimization, and run-time adaptation planning) showing how the gap between complex manual adaptations and their autonomous execution can be closed by using a holistic model-based approach.
[5] Fabian Brosig, Nikolaus Huber, and Samuel Kounev. Modeling Parameter and Context Dependencies in Online Architecture-Level Performance Models. In Proceedings of the 15th ACM SIGSOFT International Symposium on Component Based Software Engineering (CBSE 2012), June 26-28, 2012, Bertinoro, Italy, June 2012. Acceptance Rate (Full Paper): 28.5%. [ bib | http | .pdf | Abstract ]
Modern enterprise applications have to satisfy increasingly stringent Quality-of-Service requirements. To ensure that a system meets its performance requirements, the ability to predict its performance under different configurations and workloads is essential. Architecture-level performance models describe performance-relevant aspects of software architectures and execution environments allowing to evaluate different usage profiles as well as system deployment and configuration options. However, building performance models manually requires a lot of time and effort. In this paper, we present a novel automated method for the extraction of architecture-level performance models of distributed component-based systems, based on monitoring data collected at run-time. The method is validated in a case study with the industry-standard SPECjEnterprise2010 Enterprise Java benchmark, a representative software system executed in a realistic environment. The obtained performance predictions match the measurements on the real system within an error margin of mostly 10-20 percent.
[6] Nikolaus Huber, Fabian Brosig, and Samuel Kounev. Modeling Dynamic Virtualized Resource Landscapes. In Proceedings of the 8th ACM SIGSOFT International Conference on the Quality of Software Architectures (QoSA 2012), Bertinoro, Italy, June 25-28, 2012, pages 81-90. ACM, New York, NY, USA. June 2012, Acceptance Rate (Full Paper): 25.6%. [ bib | DOI | http | .pdf | Abstract ]
Modern data centers are subject to an increasing demand for flexibility. Increased flexibility and dynamics, however, also result in a higher system complexity. This complexity carries on to run-time resource management for Quality-of-Service (QoS) enforcement, rendering design-time approaches for QoS assurance inadequate. In this paper, we present a set of novel meta-models that can be used to describe the resource landscape, the architecture and resource layers of dynamic virtualized data center infrastructures, as well as their run-time adaptation and resource management aspects. With these meta-models we introduce new modeling concepts to improve model-based run-time QoS assurance. We evaluate our meta-models by modeling a representative virtualized service infrastructure and using these model instances for run-time resource allocation. The results demonstrate the benefits of the new meta-models and show how they can be used to improve model-based system adaptation and run-time resource management in dynamic virtualized data centers.
[7] Nikolaus Huber, Fabian Brosig, N. Dingle, K. Joshi, and Samuel Kounev. Providing Dependability and Performance in the Cloud: Case Studies. In Resilience Assessment and Evaluation of Computing Systems, K. Wolter, A. Avritzer, M. Vieira, and A. van Moorsel, editors, XVIII. Springer-Verlag, Berlin, Heidelberg, 2012. ISBN: 978-3-642-29031-2. [ bib | http | .pdf ]
[8] Nikolaus Huber, Marcel von Quast, Fabian Brosig, Michael Hauck, and Samuel Kounev. A Method for Experimental Analysis and Modeling of Virtualization Performance Overhead. In Cloud Computing and Services Science, Ivan Ivanov, Marten van Sinderen, and Boris Shishkov, editors, Service Science: Research and Innovations in the Service Economy, pages 353-370. Springer, New York, 2012. [ bib | DOI | http | .pdf ]
[9] Samuel Kounev, Nikolaus Huber, Simon Spinner, and Fabian Brosig. Model-based techniques for performance engineering of business information systems. In Business Modeling and Software Design, Boris Shishkov, editor, volume 109 of Lecture Notes in Business Information Processing (LNBIP), pages 19-37. Springer-Verlag, Berlin, Heidelberg, 2012. [ bib | http | .pdf | Abstract ]
With the increasing adoption of virtualization and the transition towards Cloud Computing platforms, modern business information systems are becoming increasingly complex and dynamic. This raises the challenge of guaranteeing system performance and scalability while at the same time ensuring efficient resource usage. In this paper, we present a historical perspective on the evolution of model-based performance engineering techniques for business information systems focusing on the major developments over the past several decades that have shaped the field. We survey the state-of-the-art on performance modeling and management approaches discussing the ongoing efforts in the community to increasingly bridge the gap between high-level business services and low level performance models. Finally, we wrap up with an outlook on the emergence of self-aware systems engineering as a new research area at the intersection of several computer science disciplines.
[10] Fabian Brosig, Nikolaus Huber, and Samuel Kounev. Automated Extraction of Architecture-Level Performance Models of Distributed Component-Based Systems. In 26th IEEE/ACM International Conference On Automated Software Engineering (ASE 2011), November 2011. Oread, Lawrence, Kansas. Acceptance Rate (Full Paper): 14.7% (37/252). [ bib | .pdf | Abstract ]
Modern service-oriented enterprise systems have increasingly complex and dynamic loosely-coupled architectures that often exhibit poor performance and resource efficiency and have high operating costs. This is due to the inability to predict at run-time the effect of dynamic changes in the system environment and adapt the system configuration accordingly. Architecture-level performance models provide a powerful tool for performance prediction, however, current approaches to modeling the execution context of software components are not suitable for use at run-time. In this paper, we analyze the typical online performance prediction scenarios and propose a novel performance meta-model for expressing and resolving parameter and context dependencies, specifically designed for use in online scenarios. We motivate and validate our approach in the context of a realistic and representative online performance prediction scenario based on the SPECjEnterprise2010 standard benchmark.
[11] Michael Hauck, Michael Kuperberg, Nikolaus Huber, and Ralf Reussner. Ginpex: Deriving Performance-relevant Infrastructure Properties Through Goal-oriented Experiments. In Proceedings of the 7th ACM SIGSOFT International Conference on the Quality of Software Architectures (QoSA 2011), June 20-24, 2011, pages 53-62. ACM, New York, NY, USA. June 2011. [ bib | DOI | http | .pdf ]
[12] Samuel Kounev, Fabian Brosig, and Nikolaus Huber. Self-Aware QoS Management in Virtualized Infrastructures (Poster Paper). In 8th International Conference on Autonomic Computing (ICAC 2011), Karlsruhe, Germany, June 14-18, 2011. [ bib | .pdf | Abstract ]
We present an overview of our work-in-progress and long-term research agenda aiming to develop a novel methodology for engineering of self-aware software systems. The latter will have built-in architecture-level QoS models enhanced to capture dynamic aspects of the system environment and maintained automatically during operation. The models will be exploited at run-time to adapt the system to changes in the environment ensuring that resources are utilized efficiently and QoS requirements are satisfied.
[13] Nikolaus Huber, Fabian Brosig, and Samuel Kounev. Model-based Self-Adaptive Resource Allocation in Virtualized Environments. In 6th International Symposium on Software Engineering for Adaptive and Self-Managing Systems (SEAMS 2011), Waikiki, Honolulu, HI, USA, May 23-24, 2011, pages 90-99. ACM, New York, NY, USA. May 2011, Acceptance Rate (Full Paper): 27% (21/76). [ bib | DOI | http | .pdf | Abstract ]
The adoption of virtualization and Cloud Computing technologies promises a number of benefits such as increased flexibility, better energy efficiency and lower operating costs for IT systems. However, highly variable workloads make it challenging to provide quality-of-service guarantees while at the same time ensuring efficient resource utilization. To avoid violations of service-level agreements (SLAs) or inefficient resource usage, resource allocations have to be adapted continuously during operation to reflect changes in application workloads. In this paper, we present a novel approach to self-adaptive resource allocation in virtualized environments based on online architecture-level performance models. We present a detailed case study of a representative enterprise application, the new SPECjEnterprise2010 benchmark, deployed in a virtualized cluster environment. The case study serves as a proof-of-concept demonstrating the effectiveness and practical applicability of our approach.
[14] Nikolaus Huber, Marcel von Quast, Michael Hauck, and Samuel Kounev. Evaluating and Modeling Virtualization Performance Overhead for Cloud Environments. In Proceedings of the 1st International Conference on Cloud Computing and Services Science (CLOSER 2011), Noordwijkerhout, The Netherlands, May 7-9, 2011, pages 563-573. SciTePress. May 2011, Acceptance Rate: 10.9% (18/164), Best Paper Award. [ bib | http | .pdf | Abstract ]
Due to trends like Cloud Computing and Green IT, virtualization technologies are gaining increasing importance. They promise energy and cost savings by sharing physical resources, thus making resource usage more efficient. However, resource sharing and other factors have direct effects on system performance, which are not yet well-understood. Hence, performance prediction and performance management of services deployed in virtualized environments like public and private Clouds is a challenging task. Because of the large variety of virtualization solutions, a generic approach to predict the performance overhead of services running on virtualization platforms is highly desirable. In this paper, we present experimental results on two popular state-of-the-art virtualization platforms, Citrix XenServer 5.5 and VMware ESX 4.0, as representatives of the two major hypervisor architectures. Based on these results, we propose a basic, generic performance prediction model for the two different types of hypervisor architectures. The target is to predict the performance overhead for executing services on virtualized platforms.
[15] Samuel Kounev, Konstantin Bender, Fabian Brosig, Nikolaus Huber, and Russell Okamoto. Automated Simulation-Based Capacity Planning for Enterprise Data Fabrics. In 4th International ICST Conference on Simulation Tools and Techniques, Barcelona, Spain, March 21-25, 2011, pages 27-36. ICST, Brussels, Belgium. March 2011, Acceptance Rate (Full Paper): 29.8% (23/77), ICST Best Paper Award. [ bib | slides | .pdf | Abstract ]
Enterprise data fabrics are gaining increasing attention in many industry domains including financial services, telecommunications, transportation and health care. Providing a distributed, operational data platform sitting between application infrastructures and back-end data sources, enterprise data fabrics are designed for high performance and scalability. However, given the dynamics of modern applications, system sizing and capacity planning need to be done continuously during operation to ensure adequate quality-of-service and efficient resource utilization. While most products are shipped with performance monitoring and analysis tools, such tools are typically focused on low-level profiling and they lack support for performance prediction and capacity planning. In this paper, we present a novel case study of a representative enterprise data fabric, the GemFire EDF, presenting a simulation-based tool that we have developed for automated performance prediction and capacity planning. The tool, called Jewel, automates resource demand estimation, performance model generation, performance model analysis and results processing. We present an experimental evaluation of the tool demonstrating its effectiveness and practical applicability.
[16] Nikolaus Huber, Marcel von Quast, Fabian Brosig, and Samuel Kounev. Analysis of the Performance-Influencing Factors of Virtualization Platforms. In The 12th International Symposium on Distributed Objects, Middleware, and Applications (DOA 2010), Crete, Greece, October 26, 2010. Springer-Verlag. October 2010, Acceptance Rate (Full Paper): 33%. [ bib | .pdf | Abstract ]
Nowadays, virtualization solutions are gaining increasing importance. By enabling the sharing of physical resources, thus making resource usage more efficient, they promise energy and cost savings. Additionally, virtualization is the key enabling technology for Cloud Computing and server consolidation. However, the effects of sharing resources on system performance are not yet well-understood. This makes performance prediction and performance management of services deployed in such dynamic systems very challenging. Because of the large variety of virtualization solutions, a generic approach to predict the performance influences of virtualization platforms is highly desirable. In this paper, we present a hierarchical model capturing the major performance-relevant factors of virtualization platforms. We then propose a general methodology to quantify the influence of the identified factors based on an empirical approach using benchmarks. Finally, we present a case study of Citrix XenServer 5.5, a state-of-the-art virtualization platform.
[17] Samuel Kounev, Fabian Brosig, Nikolaus Huber, and Ralf Reussner. Towards self-aware performance and resource management in modern service-oriented systems. In Proceedings of the 7th IEEE International Conference on Services Computing (SCC 2010), Miami, Florida, USA, July 5-10, 2010. IEEE Computer Society. July 2010. [ bib | .pdf | Abstract ]
Modern service-oriented systems have increasingly complex loosely-coupled architectures that often exhibit poor performance and resource efficiency and have high operating costs. This is due to the inability to predict at run-time the effect of dynamic changes in the system environment (e.g., varying service workloads) and adapt the system configuration accordingly. In this paper, we describe a long-term vision and approach for designing systems with built-in self-aware performance and resource management capabilities. We advocate the use of architecture-level performance models extracted dynamically from the evolving system configuration and maintained automatically during operation. The models will be exploited at run-time to adapt the system to changes in the environment ensuring that resources are utilized efficiently and performance requirements are continuously satisfied.
[18] Nikolaus Huber, Steffen Becker, Christoph Rathfelder, Jochen Schweflinghaus, and Ralf Reussner. Performance Modeling in Industry: A Case Study on Storage Virtualization. In ACM/IEEE 32nd International Conference on Software Engineering (ICSE 2010), Software Engineering in Practice Track, Cape Town, South Africa, May 2-8, 2010, pages 1-10. ACM, New York, NY, USA. May 2010, Acceptance Rate (Full Paper): 23% (16/71). [ bib | DOI | slides | .pdf | Abstract ]
In software engineering, performance and the integration of performance analysis methodologies gain increasing importance, especially for complex systems. Well-developed methods and tools can predict non-functional performance properties like response time or resource utilization in early design stages, thus promising time and cost savings. However, as performance modeling and performance prediction is still a young research area, the methods are not yet well-established and in wide-spread industrial use. This work is a case study of the applicability of the Palladio Component Model as a performance prediction method in an industrial environment. We model and analyze different design alternatives for storage virtualization on an IBM (Trademark of IBM in USA and/or other countries) system. The model calibration, validation and evaluation is based on data measured on a System z9 (Trademark of IBM in USA and/or other countries) as a proof of concept. The results show that performance predictions can identify performance bottlenecks and evaluate design alternatives in early stages of system development. The experiences gained were that performance modeling helps to understand and analyze a system. Hence, this case study substantiates that performance modeling is applicable in industry and a valuable method for evaluating design decisions.

Refereed Journal Articles

[1] Nikolas Roman Herbst, Nikolaus Huber, Samuel Kounev, and Erich Amrehn. Self-Adaptive Workload Classification and Forecasting for Proactive Resource Provisioning. Concurrency and Computation: Practice and Experience, Special Issue with extended versions of the best papers from ICPE 2013, John Wiley and Sons, Ltd., 2014. [ bib | DOI | http | Abstract ]
As modern enterprise software systems become increasingly dynamic, workload forecasting techniques are gaining in importance as a foundation for online capacity planning and resource management. Time series analysis covers a broad spectrum of methods to calculate workload forecasts based on history monitoring data. Related work in the field of workload forecasting mostly concentrates on evaluating specific methods and their individual optimisation potential or on predicting Quality-of-Service (QoS) metrics directly. As a basis, we present a survey on established forecasting methods of the time series analysis concerning their benefits and drawbacks and group them according to their computational overheads. In this paper, we propose a novel self-adaptive approach that selects suitable forecasting methods for a given context based on a decision tree and direct feedback cycles together with a corresponding implementation. The user needs to provide only his general forecasting objectives. In several experiments and case studies based on real world workload traces, we show that our implementation of the approach provides continuous and reliable forecast results at run-time. The results of this extensive evaluation show that the relative error of the individual forecast points is significantly reduced compared to statically applied forecasting methods, e.g. in an exemplary scenario on average by 37%. In a case study, between 55% and 75% of the violations of a given service level agreement can be prevented by applying proactive resource provisioning based on the forecast results of our implementation.
[2] Fabian Brosig, Nikolaus Huber, and Samuel Kounev. Architecture-Level Software Performance Abstractions for Online Performance Prediction. Elsevier Science of Computer Programming Journal (SciCo), 90, Part B:71-92, 2014, Elsevier. [ bib | DOI | http | .pdf ]
[3] Nikolaus Huber, André van Hoorn, Anne Koziolek, Fabian Brosig, and Samuel Kounev. Modeling Run-Time Adaptation at the System Architecture Level in Dynamic Service-Oriented Environments. Service Oriented Computing and Applications Journal (SOCA), 8(1):73-89, 2014, Springer London. [ bib | DOI | .pdf ]
[4] Michael Hauck, Michael Kuperberg, Nikolaus Huber, and Ralf Reussner. Deriving performance-relevant infrastructure properties through model-based experiments with Ginpex. Software & Systems Modeling, pages 1-21, 2013, Springer-Verlag. [ bib | DOI | http | Abstract ]
To predict the performance of an application, it is crucial to consider the performance of the underlying infrastructure. Thus, to yield accurate prediction results, performance-relevant properties and behaviour of the infrastructure have to be integrated into performance models. However, capturing these properties is a cumbersome and error-prone task, as it requires carefully engineered measurements and experiments. Existing approaches for creating infrastructure performance models require manual coding of these experiments, or ignore the detailed properties in the models. The contribution of this paper is the Ginpex approach, which introduces goal-oriented and model-based specification and generation of executable performance experiments for automatically detecting and quantifying performance-relevant infrastructure properties. Ginpex provides a metamodel for experiment specification and comes with predefined experiment templates that provide automated experiment execution on the target platform and also automate the evaluation of the experiment results. We evaluate Ginpex using three case studies, where experiments are executed to quantify various infrastructure properties.
[5] Nigel Thomas, Jeremy Bradley, William Knottenbelt, Samuel Kounev, Nikolaus Huber, and Fabian Brosig. Preface. Electronic Notes in Theoretical Computer Science, 275:1-3, 2011, Elsevier Science Publishers B. V., Amsterdam, The Netherlands. [ bib | DOI ]

Theses

[1] Nikolaus Huber. Performance Modeling of Storage Virtualization. Master's thesis, Universität Karlsruhe (TH), Karlsruhe, Germany, April 2009. GFFT Prize. [ bib ]