
NOTE: This page is outdated and not maintained any more. You can find our new homepage at

http://se.informatik.uni-wuerzburg.de/

Publications 2014

[1] Samuel Kounev, Fabian Brosig, and Nikolaus Huber. The Descartes Modeling Language. Technical report, Department of Computer Science, University of Wuerzburg, October 2014. [ bib | http | http | .pdf | Abstract ]
This technical report introduces the Descartes Modeling Language (DML), a new architecture-level modeling language for modeling Quality-of-Service (QoS) and resource management related aspects of modern dynamic IT systems, infrastructures and services. DML is designed to serve as a basis for self-aware resource management during operation ensuring that system QoS requirements are continuously satisfied while infrastructure resources are utilized as efficiently as possible.
[2] Andreas Rentschler, Dominik Werle, Qais Noorshams, Lucia Happe, and Ralf Reussner. Remodularizing Legacy Model Transformations with Automatic Clustering Techniques. In Proceedings of the 3rd Workshop on the Analysis of Model Transformations co-located with the 17th International Conference on Model Driven Engineering Languages and Systems (AMT@MODELS '14), Valencia, Spain, September 29, 2014, Benoit Baudry, Jürgen Dingel, Levi Lucio, and Hans Vangheluwe, editors, October 2014, volume 1277 of CEUR Workshop Proceedings, pages 4-13. CEUR-WS.org. October 2014. [ bib | http | .pdf ]
[3] Rouven Krebs, Simon Spinner, Nadia Ahmed, and Samuel Kounev. Resource Usage Control In Multi-Tenant Applications. In Proceedings of the 14th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGrid 2014), Chicago, IL, USA, May 26, 2014. IEEE/ACM. May 2014, Accepted for Publication. [ bib | .pdf | Abstract ]
Multi-tenancy is an approach to share one application instance among multiple customers by providing each of them a dedicated view. This approach is commonly used by SaaS providers to reduce the costs of service provisioning. Tenants also expect to be isolated in terms of the performance they observe, and the provider's inability to offer performance guarantees is a major obstacle for potential cloud customers. To guarantee isolated performance, it is essential to control the resources used by a tenant. This is a challenge because the layers of the execution environment responsible for controlling resource usage (e.g., the operating system) normally have no knowledge of entities defined at the application level and thus cannot distinguish between different tenants. Furthermore, it is hard to predict how tenant requests propagate through the multiple layers of the execution environment down to the physical resource layer. The intended abstraction of the application from the resource-controlling layers does not allow this problem to be solved solely within the application. In this paper, we propose an approach which applies resource demand estimation techniques in combination with request-based admission control. The resource demand estimation is used to determine resource consumption information for individual requests. The admission control mechanism uses this knowledge to delay requests originating from tenants that exceed their allocated resource share. The proposed method is validated by a widely accepted benchmark, showing its applicability in a setup motivated by today's platform environments.
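For illustration, the request-based admission control sketched in this abstract could be modeled roughly as follows. This is a minimal hypothetical sketch, not the paper's actual algorithm: the per-window accounting, the `TenantAdmission` class, and all tenant names and share values are made up for the example.

```python
import heapq

class TenantAdmission:
    """Hypothetical sketch: delay requests of tenants that exceed their
    allocated resource share within an accounting window (illustrative
    only, not the algorithm from the paper)."""

    def __init__(self, shares):
        # shares: tenant -> allowed resource units per accounting window
        self.shares = dict(shares)
        self.used = {t: 0.0 for t in shares}
        self.delayed = []  # min-heap of (arrival_time, tenant, demand)

    def admit(self, tenant, estimated_demand, now):
        """Admit the request if the tenant stays within its share,
        otherwise delay it. estimated_demand would come from a resource
        demand estimation technique."""
        if self.used[tenant] + estimated_demand <= self.shares[tenant]:
            self.used[tenant] += estimated_demand
            return True
        heapq.heappush(self.delayed, (now, tenant, estimated_demand))
        return False

    def new_window(self):
        """Reset usage at the start of a new window and retry delayed
        requests in arrival order; return the tenants admitted."""
        self.used = {t: 0.0 for t in self.used}
        pending, self.delayed = self.delayed, []
        admitted = []
        for arrival, tenant, demand in sorted(pending):
            if self.admit(tenant, demand, arrival):
                admitted.append(tenant)
        return admitted

ac = TenantAdmission({"A": 10.0, "B": 10.0})
assert ac.admit("A", 6.0, now=0.0)      # within A's share
assert not ac.admit("A", 6.0, now=1.0)  # exceeds A's share -> delayed
assert ac.admit("B", 6.0, now=1.0)      # B is unaffected: isolation
assert ac.new_window() == ["A"]         # delayed request runs later
```

The key property the sketch illustrates is that a misbehaving tenant is delayed while other tenants' requests continue to be admitted.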
[4] Andreas Rentschler, Dominik Werle, Qais Noorshams, Lucia Happe, and Ralf Reussner. Designing Information Hiding Modularity for Model Transformation Languages. In Proceedings of the 13th International Conference on Modularity (AOSD '14), Lugano, Switzerland, April 22 - 26, 2014, April 2014, pages 217-228. ACM, New York, NY, USA. April 2014, Acceptance Rate: 35.0%. [ bib | DOI | http | .pdf ]
[5] Rouven Krebs and Manuel Loesch. Comparison of Request Admission Based Performance Isolation Approaches in Multi-Tenant SaaS Applications. In Proceedings of 4th International Conference On Cloud Computing And Services Science (CLOSER 2014), Barcelona, Spain, April 3, 2014. SciTePress. April 2014, Short Paper. [ bib | .pdf | Abstract ]
Multi-tenancy is an approach to share one application instance among multiple customers by providing each of them a dedicated view. This approach is commonly used by SaaS providers to reduce the costs of service provisioning. Tenants also expect to be isolated in terms of the performance they observe, and the provider's inability to offer performance guarantees is a major obstacle for potential cloud customers. To guarantee isolated performance, it is essential to control the resources used by a tenant. This is a challenge because the layers of the execution environment responsible for controlling resource usage (e.g., the operating system) normally have no knowledge of entities defined at the application level and thus cannot distinguish between different tenants. Furthermore, it is hard to predict how tenant requests propagate through the multiple layers of the execution environment down to the physical resource layer. The intended abstraction of the application from the resource-controlling layers does not allow this problem to be solved solely within the application. In this paper, we propose an approach which applies resource demand estimation techniques in combination with request-based admission control. The resource demand estimation is used to determine resource consumption information for individual requests. The admission control mechanism uses this knowledge to delay requests originating from tenants that exceed their allocated resource share. The proposed method is validated by a widely accepted benchmark, showing its applicability in a setup motivated by today's platform environments.
[6] Jóakim Gunnarson von Kistowski, Nikolas Roman Herbst, and Samuel Kounev. Modeling Variations in Load Intensity over Time. In Proceedings of the 3rd International Workshop on Large-Scale Testing (LT 2014), co-located with the 5th ACM/SPEC International Conference on Performance Engineering (ICPE 2014), Dublin, Ireland, March 22, 2014, pages 1-4. ACM, New York, NY, USA. March 2014. [ bib | DOI | slides | http | .pdf | Abstract ]
Today's software systems are expected to deliver reliable performance under highly variable load intensities while at the same time making efficient use of dynamically allocated resources. Conventional benchmarking frameworks provide limited support for emulating such highly variable and dynamic load profiles and workload scenarios. Industrial benchmarks typically use workloads with constant or stepwise increasing load intensity, or they simply replay recorded workload traces. Based on this observation, we identify the need for means of flexibly defining load profiles and address it by introducing two meta-models at different abstraction levels. At the lower abstraction level, the Descartes Load Intensity Meta-Model (DLIM) offers a structured and accessible way of describing the load intensity over time by editing and combining mathematical functions. The High-Level Descartes Load Intensity Meta-Model (HLDLIM) allows the description of load variations using a few defined parameters that characterize the seasonal patterns, trends, bursts and noise parts. We demonstrate that both meta-models are capable of capturing real-world load profiles with acceptable accuracy through comparison with a real-life trace.
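The idea of composing a load-intensity profile from seasonal, trend, burst and noise parts, as the abstract describes, can be illustrated with a small function. This is only a sketch of the general concept; DLIM's and HLDLIM's actual meta-model elements and parameters differ, and all constants below are made up.

```python
import math

def load_intensity(t, base=100.0, seasonal_amp=50.0, period=24.0,
                   trend=2.0, bursts=(), noise_sd=0.0, rng=None):
    """Hypothetical composition of a load-intensity value at time t
    from seasonal, trend, burst and noise parts (illustrative only,
    not DLIM's actual formalism). Pass e.g. random.Random(0) as rng
    to enable Gaussian noise."""
    seasonal = seasonal_amp * math.sin(2 * math.pi * t / period)
    linear_trend = trend * t
    # Each burst is a Gaussian bump: (center, amplitude, width).
    burst = sum(amp * math.exp(-((t - center) ** 2) / (2 * width ** 2))
                for center, amp, width in bursts)
    noise = rng.gauss(0, noise_sd) if rng and noise_sd else 0.0
    return max(0.0, base + seasonal + linear_trend + burst + noise)

# A daily seasonal pattern with a burst around hour 12:
profile = [load_intensity(h, bursts=[(12.0, 80.0, 1.0)]) for h in range(24)]
assert profile[12] > profile[11] and profile[12] > profile[13]  # burst peak
```

Arrival rates derived from such a profile could then drive an open-workload load generator.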
[7] Jóakim Gunnarson von Kistowski, Nikolas Roman Herbst, and Samuel Kounev. LIMBO: A Tool For Modeling Variable Load Intensities (Demonstration Paper). In Proceedings of the 5th ACM/SPEC International Conference on Performance Engineering (ICPE 2014), Dublin, Ireland, March 22-26, 2014, ICPE '14, pages 225-226. ACM, New York, NY, USA. March 2014. [ bib | DOI | slides | http | .pdf | Abstract ]
Modern software systems are expected to deliver reliable performance under highly variable load intensities while at the same time making efficient use of dynamically allocated resources. Conventional benchmarking frameworks provide limited support for emulating such highly variable and dynamic load profiles and workload scenarios. Industrial benchmarks typically use workloads with constant or stepwise increasing load intensity, or they simply replay recorded workload traces. In this paper, we present LIMBO - an Eclipse-based tool for modeling variable load intensity profiles based on the Descartes Load Intensity Model as an underlying modeling formalism.
[8] Andreas Weber, Nikolas Roman Herbst, Henning Groenda, and Samuel Kounev. Towards a Resource Elasticity Benchmark for Cloud Environments. In Proceedings of the 2nd International Workshop on Hot Topics in Cloud Service Scalability (HotTopiCS 2014), co-located with the 5th ACM/SPEC International Conference on Performance Engineering (ICPE 2014), Dublin, Ireland, March 22, 2014. ACM. March 2014. [ bib | slides | .pdf | Abstract ]
Auto-scaling features offered by today's cloud infrastructures provide increased flexibility especially for customers that experience high variations in the load intensity over time. However, auto-scaling features introduce new system quality attributes when considering their accuracy, timing, and boundaries. Therefore, distinguishing between different offerings has become a complex task, as it is not yet supported by reliable metrics and measurement approaches. In this paper, we discuss shortcomings of existing approaches for measuring and evaluating elastic behavior and propose a novel benchmark methodology specifically designed for evaluating the elasticity aspects of modern cloud platforms. The benchmark is based on open workloads with realistic load variation profiles that are calibrated to induce identical resource demand variations independent of the underlying hardware performance. Furthermore, we propose new metrics that capture the accuracy of resource allocations and de-allocations, as well as the timing aspects of an auto-scaling mechanism explicitly.
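The accuracy aspect of the metrics mentioned in the abstract can be illustrated by comparing a resource demand curve against the supplied resources over time. The sketch below is inspired by, but not identical to, the metrics the paper proposes; the averaging scheme and the sample data are assumptions for illustration.

```python
def provisioning_accuracy(demand, supply):
    """Hypothetical under-/over-provisioning metrics over a discrete
    timeline: average missing resource units (under-provisioning) and
    average excess resource units (over-provisioning) per time step."""
    assert len(demand) == len(supply)
    under = sum(max(d - s, 0) for d, s in zip(demand, supply)) / len(demand)
    over = sum(max(s - d, 0) for d, s in zip(demand, supply)) / len(demand)
    return under, over

# Demanded vs. auto-scaled resource units per time step (made-up data):
demand = [2, 4, 6, 4, 2]
supply = [2, 3, 6, 6, 6]
under, over = provisioning_accuracy(demand, supply)
assert under == 0.2  # one step short by 1 unit -> 1/5
assert over == 1.2   # excess of 2 and 4 units while scaling down -> 6/5
```

A slow de-allocation shows up as a high over-provisioning value, a slow allocation as a high under-provisioning value, which is exactly the kind of timing behavior the benchmark aims to expose.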
[9] Simon Spinner, Giuliano Casale, Xiaoyun Zhu, and Samuel Kounev. LibReDE: A Library for Resource Demand Estimation (Demonstration Paper). In Proceedings of the 5th ACM/SPEC International Conference on Performance Engineering (ICPE 2014), Dublin, Ireland, March 22-26, 2014. ACM. March 2014, Accepted for Publication. [ bib | Abstract ]
When creating a performance model, it is necessary to quantify the amount of resources consumed by an application serving individual requests. In distributed enterprise systems, these resource demands usually cannot be observed directly; their estimation is therefore a major challenge. Different statistical approaches to resource demand estimation based on monitoring data have been proposed, e.g., using linear regression or Kalman filtering techniques. In this paper, we present LibReDE, a library of ready-to-use implementations of approaches to resource demand estimation that can be used for online and offline analysis. It is the first publicly available tool for this task and aims at supporting performance engineers during performance model construction. The library enables the quick comparison of the estimation accuracy of different approaches in a given context and thus helps to select an optimal one.
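One of the simplest estimation approaches of the kind the abstract refers to is based on the Service Demand Law, D = U / X, relating resource utilization U and throughput X per monitoring interval. The sketch below illustrates that idea only; it is not LibReDE's API, and the function name and sample numbers are made up.

```python
def estimate_demand_service_demand_law(utilization, throughput):
    """Estimate the mean resource demand of a single request class via
    the Service Demand Law D = U / X, averaged over monitoring
    intervals (a sketch; libraries like LibReDE also ship regression-
    and Kalman-filter-based estimators for multi-class workloads)."""
    samples = [u / x for u, x in zip(utilization, throughput) if x > 0]
    return sum(samples) / len(samples)

# CPU utilization (0..1) and request throughput per monitoring interval:
util = [0.40, 0.60, 0.50]
tput = [20.0, 30.0, 25.0]  # requests/second
d = estimate_demand_service_demand_law(util, tput)
assert abs(d - 0.02) < 1e-9  # ~20 ms of CPU time per request
```

For multiple concurrent request classes the single-class law no longer suffices, which is where the regression and filtering techniques mentioned in the abstract come in.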
[10] Rouven Krebs, Philipp Schneider, and Nikolas Herbst. Optimization Method for Request Admission Control to Guarantee Performance Isolation. In Proceedings of the 2nd International Workshop on Hot Topics in Cloud Service Scalability (HotTopiCS 2014), co-located with the 5th ACM/SPEC International Conference on Performance Engineering (ICPE 2014), Dublin, Ireland, March 22, 2014. ACM. March 2014. [ bib | slides | .pdf | Abstract ]
Software-as-a-Service (SaaS) often shares one single application instance among different tenants to reduce costs. However, sharing potentially leads to undesired influence from one tenant onto the performance observed by the others. Furthermore, providing one tenant additional resources to support its increasing demands, without also increasing the performance of tenants who do not pay for it, is a major challenge. The application intentionally does not manage hardware resources, and the OS is not aware of application-level entities like tenants. Thus, it is difficult to control the performance of different tenants to keep them isolated. These problems gain importance as performance is one of the major obstacles for cloud customers. Existing work applies request-based admission control mechanisms like a weighted round robin with an individual queue for each tenant to control the share guaranteed to a tenant. However, the computation of the concrete weights for such an admission control is still challenging. In this paper, we present a fitness function and optimization approach reflecting various requirements from this field to compute proper weights, with the goal of ensuring isolated performance as a foundation for scaling on a per-tenant basis.
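The weighted round robin with per-tenant queues that the abstract takes as its starting point can be sketched as follows. The weights are exactly the knob the paper's optimization approach computes; the values and tenant names here are made up for illustration.

```python
from collections import deque

def weighted_round_robin(queues, weights):
    """Sketch of request-based admission control with one queue per
    tenant: in each round, tenant t may dispatch up to weights[t]
    requests (illustrative dispatch order only)."""
    order = []
    while any(queues.values()):
        for tenant, weight in weights.items():
            q = queues[tenant]
            for _ in range(weight):
                if not q:
                    break
                order.append(q.popleft())
    return order

queues = {"A": deque(["a1", "a2", "a3", "a4"]), "B": deque(["b1", "b2"])}
# Tenant A is guaranteed twice the share of tenant B:
assert weighted_round_robin(queues, {"A": 2, "B": 1}) == \
    ["a1", "a2", "b1", "a3", "a4", "b2"]
```

Choosing the weights well is the hard part: they must reflect per-tenant guarantees, actual resource demands, and current load, which is what motivates an optimization-based computation.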
[11] Nikolas Roman Herbst, Nikolaus Huber, Samuel Kounev, and Erich Amrehn. Self-Adaptive Workload Classification and Forecasting for Proactive Resource Provisioning. Concurrency and Computation - Practice and Experience, Special Issue with extended versions of the best papers from ICPE 2013, John Wiley and Sons, Ltd., 2014. [ bib | DOI | http | Abstract ]
As modern enterprise software systems become increasingly dynamic, workload forecasting techniques are gaining in importance as a foundation for online capacity planning and resource management. Time series analysis covers a broad spectrum of methods to calculate workload forecasts based on history monitoring data. Related work in the field of workload forecasting mostly concentrates on evaluating specific methods and their individual optimisation potential or on predicting Quality-of-Service (QoS) metrics directly. As a basis, we present a survey on established forecasting methods of the time series analysis concerning their benefits and drawbacks and group them according to their computational overheads. In this paper, we propose a novel self-adaptive approach that selects suitable forecasting methods for a given context based on a decision tree and direct feedback cycles together with a corresponding implementation. The user needs to provide only his general forecasting objectives. In several experiments and case studies based on real world workload traces, we show that our implementation of the approach provides continuous and reliable forecast results at run-time. The results of this extensive evaluation show that the relative error of the individual forecast points is significantly reduced compared to statically applied forecasting methods, e.g. in an exemplary scenario on average by 37%. In a case study, between 55% and 75% of the violations of a given service level agreement can be prevented by applying proactive resource provisioning based on the forecast results of our implementation.
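The feedback-cycle part of the method selection described in the abstract can be illustrated by back-testing candidate forecasting methods on a held-out tail of the workload trace and picking the one with the lowest recent error. This is a deliberately simplified sketch: the paper's approach uses a decision tree combined with feedback and far richer forecasting methods; the two toy methods and all data below are made up.

```python
def naive(history, h):
    """Last observed value carried forward."""
    return [history[-1]] * h

def moving_average(history, h, window=3):
    """Mean of the last `window` observations carried forward."""
    avg = sum(history[-window:]) / window
    return [avg] * h

def select_method(history, horizon, methods, holdout=3):
    """Hypothetical feedback cycle: back-test each candidate on a
    held-out tail and pick the one with the smallest mean absolute
    error, then forecast with it."""
    train, test = history[:-holdout], history[-holdout:]
    def mae(m):
        forecast = m(train, holdout)
        return sum(abs(f - a) for f, a in zip(forecast, test)) / holdout
    best = min(methods, key=mae)
    return best, best(history, horizon)

trace = [10, 12, 11, 13, 20, 30, 40]  # ramp-up at the end of the trace
method, forecast = select_method(trace, 2, [naive, moving_average])
assert method is naive        # naive tracks the ramp better here
assert forecast == [40, 40]
```

Re-running the selection at run-time as new monitoring data arrives is what makes such an approach self-adaptive rather than a one-off method choice.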
[12] Steffen Becker, Wilhelm Hasselbring, Andre van Hoorn, Samuel Kounev, Ralf Reussner, et al. Proceedings of the 2014 Symposium on Software Performance (SOSP'14): Joint Descartes/Kieker/Palladio Days. 2014, Stuttgart, Germany, Universität Stuttgart. [ bib ]
[13] Tomás Martinec, Lukás Marek, Antonín Steinhauser, Petr Tůma, Qais Noorshams, Andreas Rentschler, and Ralf Reussner. Constructing Performance Model of JMS Middleware Platform. In Proceedings of the 5th ACM/SPEC International Conference on Performance Engineering, Dublin, Ireland, 2014, ICPE '14, pages 123-134. ACM, New York, NY, USA. 2014. [ bib | DOI | http ]
[14] Qais Noorshams, Kiana Rostami, Samuel Kounev, and Ralf Reussner. Modeling of I/O Performance Interference in Virtualized Environments with Queueing Petri Nets. In Proceedings of the IEEE 22nd International Symposium on Modeling, Analysis and Simulation of Computer and Telecommunication Systems, Paris, France, 2014, MASCOTS '14. [ bib | .pdf ]
[15] Qais Noorshams, Roland Reeb, Andreas Rentschler, Samuel Kounev, and Ralf Reussner. Enriching software architecture models with statistical models for performance prediction in modern storage environments. In Proceedings of the 17th International ACM Sigsoft Symposium on Component-based Software Engineering, Marcq-en-Bareul, France, 2014, CBSE '14, pages 45-54. ACM, New York, NY, USA. 2014, Acceptance Rate (Full Paper): 14/62 = 23%. [ bib | DOI | http | .pdf ]
[16] Qais Noorshams, Axel Busch, Andreas Rentschler, Dominik Bruhn, Samuel Kounev, Petr Tůma, and Ralf Reussner. Automated Modeling of I/O Performance and Interference Effects in Virtualized Storage Systems. In 34th IEEE International Conference on Distributed Computing Systems Workshops (ICDCS 2014 Workshops). 4th International Workshop on Data Center Performance, DCPerf '14, Madrid, Spain, 2014, pages 88-93. [ bib | DOI | http | .pdf ]
[17] Rouven Krebs, Christof Momm, and Samuel Kounev. Metrics and Techniques for Quantifying Performance Isolation in Cloud Environments. Elsevier Science of Computer Programming Journal (SciCo), 90, Part B:116 - 134, 2014, Elsevier B.V. Special Issue on Component-Based Software Engineering and Software Architecture. [ bib | DOI | http | .pdf | Abstract ]
The cloud computing paradigm enables the provision of cost-efficient IT services by leveraging economies of scale and sharing data center resources efficiently among multiple independent applications and customers. However, the sharing of resources leads to possible interference between users, and performance problems are one of the major obstacles for potential cloud customers. Consequently, it is one of the primary goals of cloud service providers to have different customers and their hosted applications isolated as much as possible in terms of the performance they observe. To make different offerings comparable with regard to their performance isolation capabilities, a representative metric is needed to quantify the level of performance isolation in cloud environments. Such a metric should support external measurement by running benchmarks from the outside, treating the cloud as a black box. In this article, we propose three different types of novel metrics for quantifying the performance isolation of cloud-based systems. We consider four new approaches to achieve performance isolation in Software-as-a-Service (SaaS) offerings and evaluate them based on the proposed metrics as part of a simulation-based case study. To demonstrate the effectiveness and practical applicability of the proposed metrics for quantifying the performance isolation in various scenarios, we present a second case study evaluating performance isolation of the hypervisor Xen.
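A black-box isolation metric of the kind the abstract calls for can be illustrated by relating the degradation observed by abiding tenants to the load increase caused by disruptive tenants. This sketch is only inspired by the article, which defines several distinct metric types; the formula, the function name, and the numbers are assumptions for illustration.

```python
def isolation_metric(ref_response, observed_response, ref_load, disruptive_load):
    """Hypothetical black-box isolation metric: relative response-time
    degradation of the abiding tenants per relative load increase of
    the disruptive tenants. 0 means perfectly isolated; larger values
    mean weaker isolation (sketch only)."""
    rel_degradation = (observed_response - ref_response) / ref_response
    rel_load_increase = (disruptive_load - ref_load) / ref_load
    return rel_degradation / rel_load_increase

# Disruptive tenants double their load; the abiding tenants' response
# time rises from 100 ms to 110 ms (made-up measurements):
i = isolation_metric(100.0, 110.0, ref_load=50.0, disruptive_load=100.0)
assert abs(i - 0.1) < 1e-9  # 10% degradation per 100% load increase
```

Because the metric needs only externally observable response times and load levels, it fits the black-box measurement setting the article targets.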
[18] Fabian Brosig, Nikolaus Huber, and Samuel Kounev. Architecture-Level Software Performance Abstractions for Online Performance Prediction. Elsevier Science of Computer Programming Journal (SciCo), 90, Part B:71 - 92, 2014, Elsevier. [ bib | DOI | http | .pdf ]
[19] Nikolaus Huber, André van Hoorn, Anne Koziolek, Fabian Brosig, and Samuel Kounev. Modeling Run-Time Adaptation at the System Architecture Level in Dynamic Service-Oriented Environments. Service Oriented Computing and Applications Journal (SOCA), 8(1):73-89, 2014, Springer London. [ bib | DOI | .pdf ]
[20] Fabian Gorsler, Fabian Brosig, and Samuel Kounev. Performance Queries for Architecture-Level Performance Models. In Proceedings of the 5th ACM/SPEC International Conference on Performance Engineering (ICPE 2014), Dublin, Ireland, 2014. ACM, New York, NY, USA. 2014, Accepted for publication. Acceptance Rate (Full Paper): 29%. [ bib ]
[21] Piotr Rygielski and Samuel Kounev. Data Center Network Throughput Analysis using Queueing Petri Nets. In 34th IEEE International Conference on Distributed Computing Systems Workshops (ICDCS 2014 Workshops). 4th International Workshop on Data Center Performance, (DCPerf 2014), Madrid, Spain, 2014. (Paper accepted for publication). [ bib ]


Publications 2013

[1] Fabian Gorsler, Fabian Brosig, and Samuel Kounev. Controlling the Palladio Bench Using the Descartes Query Language. In Proceedings of the Symposium on Software Performance: Joint Kieker/Palladio Days (KPDAYS 2013), Steffen Becker, Wilhelm Hasselbring, André van Hoorn, and Ralf Reussner, editors, November 2013, number 1083, pages 109-118. CEUR-WS.org, Aachen, Germany. November 2013. [ bib | http | .pdf | Abstract ]
The Palladio Bench is a tool to model, simulate and analyze Palladio Component Model (PCM) instances. However, the Palladio Bench offers neither a single interface to automate experiments nor an Application Programming Interface (API) to trigger the simulation of PCM instances and extract performance prediction results. The Descartes Query Language (DQL) is a novel declarative query language that integrates different performance modeling and prediction techniques behind a unifying interface. Users benefit from the abstraction of specific tools to prepare and trigger performance predictions, less effort to obtain performance metrics of interest, and means to automate performance predictions. In this paper, we describe the realization of a DQL Connector for PCM and demonstrate the applicability of our approach in a case study.
[2] Piotr Rygielski, Samuel Kounev, and Steffen Zschaler. Model-Based Throughput Prediction in Data Center Networks. In Proceedings of the 2nd IEEE International Workshop on Measurements and Networking (M&N 2013), Naples, Italy, October 7-8, 2013, pages 167-172. [ bib | .pdf ]
[3] Jürgen Walter. Parallel Simulation of Queueing Petri Net Models. Diploma thesis, Karlsruhe Institute of Technology (KIT), Am Fasanengarten 5, 76131 Karlsruhe, Germany, October 2013. [ bib | .pdf | Abstract ]
For years the CPU clock frequency was the key to improving processor performance. Nowadays, modern processors enable performance improvements by increasing the number of cores. However, existing software needs to be adapted to be able to utilize multiple cores. Such an adaptation poses many challenges in the field of discrete-event software simulation. Decades of intensive research have been spent to find a general solution for parallel discrete-event simulation. In this context, QNs and PNs have been extensively studied. However, to the best of our knowledge, there is only one previous work that considers the concurrent simulation of QPNs [Juergens1997]. That work focuses on comparing different synchronization algorithms and largely excludes lookahead calculation and net decomposition. In this thesis, we build upon and extend this work. For this purpose, we adapted and extended findings from QNs, PNs and parallel simulation in general. We apply our findings to SimQPN, a sequential simulation engine for QPNs. Among other application areas, SimQPN is currently applied to online performance prediction, for which a speedup due to parallelization is desirable. We present a parallel SimQPN implementation that employs application-level and event-level parallelism. A validation ensures the functional correctness of the new parallel implementations. The parallelization of multiple runs enables almost linear speedup. We parallelized the execution of a single run by the use of a conservative barrier-based synchronization algorithm. The speedup for a single run depends on the characteristics of the model. Hence, a number of experiments on different net characteristics were conducted, showing that for certain models a superlinear speedup is possible.
[4] Fabian Brosig, Fabian Gorsler, Nikolaus Huber, and Samuel Kounev. Evaluating Approaches for Performance Prediction in Virtualized Environments (Short Paper). In Proceedings of the IEEE 21st International Symposium on Modeling, Analysis and Simulation of Computer and Telecommunication Systems (MASCOTS 2013), San Francisco, USA, August 14-16, 2013. [ bib | .pdf ]
[5] Rouven Krebs, Alexander Wert, and Samuel Kounev. Multi-Tenancy Performance Benchmark for Web Application Platforms (Industrial Track). In Proceedings of the 13th International Conference on Web Engineering (ICWE 2013), Aalborg, Denmark, July 8-12, 2013. Aalborg University, Denmark, Springer-Verlag. July 2013. [ bib | .pdf ]
[6] Fabian Gorsler. Online Performance Queries for Architecture-Level Performance Models. Master's thesis, Karlsruhe Institute of Technology (KIT), Am Fasanengarten 5, 76131 Karlsruhe, Germany, July 2013. [ bib | .pdf ]
[7] Nikolas Roman Herbst, Samuel Kounev, and Ralf Reussner. Elasticity in Cloud Computing: What it is, and What it is Not (Short Paper). In Proceedings of the 10th International Conference on Autonomic Computing (ICAC 2013), San Jose, CA, June 24-28, 2013. USENIX. June 2013, Acceptance Rate (Short Paper): 36.9%. [ bib | slides | http | .pdf | Abstract ]
Originating from the field of physics and economics, the term elasticity is nowadays heavily used in the context of cloud computing. In this context, elasticity is commonly understood as the ability of a system to automatically provision and de-provision computing resources on demand as workloads change. However, elasticity still lacks a precise definition as well as representative metrics coupled with a benchmarking methodology to enable comparability of systems. Existing definitions of elasticity are largely inconsistent and unspecific, leading to confusion in the use of the term and its differentiation from related terms such as scalability and efficiency; the proposed measurement methodologies do not provide means to quantify elasticity without mixing it with efficiency or scalability aspects. In this short paper, we propose a precise definition of elasticity and analyze its core properties and requirements, explicitly distinguishing it from related terms such as scalability, efficiency, and agility. Furthermore, we present a set of appropriate elasticity metrics and sketch a new elasticity-tailored benchmarking methodology addressing the special requirements on workload design and calibration.
[8] Aleksandar Milenkoski, Samuel Kounev, Alberto Avritzer, Nuno Antunes, and Marco Vieira. On Benchmarking Intrusion Detection Systems in Virtualized Environments. Technical Report SPEC-RG-2013-002 v.1.0, SPEC Research Group - IDS Benchmarking Working Group, Standard Performance Evaluation Corporation (SPEC), 7001 Heritage Village Plaza Suite 225, Gainesville, VA 20155, June 2013. [ bib | .pdf | Abstract ]
Modern intrusion detection systems (IDSes) for virtualized environments are deployed in the virtualization layer with components inside the virtual machine monitor (VMM) and the trusted host virtual machine (VM). Such IDSes can monitor at the same time the network and host activities of all guest VMs running on top of a VMM while being isolated from malicious users of these VMs. We refer to IDSes for virtualized environments as VMM-based IDSes. In this work, we analyze state-of-the-art intrusion detection techniques applied in virtualized environments and architectures of VMM-based IDSes. Further, we identify challenges that apply specifically to benchmarking VMM-based IDSes, focussing on workloads and metrics. For example, we discuss the challenge of defining representative baseline benign workload profiles as well as the challenge of defining malicious workloads containing attacks targeted at the VMM. We also discuss the impact of on-demand resource provisioning features of virtualized environments (e.g., CPU and memory hotplugging, memory ballooning) on IDS benchmarking measures such as capacity and attack detection accuracy. Finally, we outline future research directions in the area of benchmarking VMM-based IDSes and of intrusion detection in virtualized environments in general.
[9] Andreas Rentschler, Qais Noorshams, Lucia Happe, and Ralf Reussner. Interactive Visual Analytics for Efficient Maintenance of Model Transformations. In Proceedings of the 6th International Conference on Model Transformation (ICMT '13), Budapest, Hungary, Keith Duddy and Gerti Kappel, editors, June 2013, volume 7909 of Lecture Notes in Computer Science, pages 141-157. Springer-Verlag Berlin Heidelberg. June 2013, Acceptance Rate: 20.7%. [ bib | DOI | http | .pdf ]
[10] Samuel Kounev, Kai Sachs, and Piotr Rygielski. SPEC Research Group Newsletter, vol. 1 no. 2, June 2013. Published by Standard Performance Evaluation Corporation (SPEC). [ bib | .html | .pdf ]
[11] Samuel Kounev, Christoph Rathfelder, and Benjamin Klatt. Modeling of Event-based Communication in Component-based Architectures: State-of-the-Art and Future Directions. Electronic Notes in Theoretical Computer Science (ENTCS), 295:3-9, May 2013, Elsevier Science Publishers B. V., Amsterdam, The Netherlands. [ bib | slides | http | .pdf | Abstract ]
Event-based communication is used in different domains including telecommunications, transportation, and business information systems to build scalable distributed systems. Such systems typically have stringent requirements for performance and scalability as they provide business and mission critical services. While the use of event-based communication enables loosely-coupled interactions between components and leads to improved system scalability, it makes it much harder for developers to estimate the system's behavior and performance under load due to the decoupling of components and control flow. We present an overview of our approach enabling the modeling and performance prediction of event-based systems at the architecture level. Applying a model-to-model transformation, our approach integrates platform-specific performance influences of the underlying middleware while enabling the use of different existing analytical and simulation-based prediction techniques. The results of two real-world case studies demonstrate the effectiveness, practicability and accuracy of the proposed modeling and prediction approach.
[12] Manuel Loesch and Rouven Krebs. Conceptual Approach for Performance Isolation in Multi-Tenant Systems (Short Paper). In Proceedings of the 3rd International Conference on Cloud Computing and Service Science (CLOSER 2013), Aachen, Germany, May 8-10, 2013. RWTH Aachen, Germany, SciTePress. May 2013. [ bib | .pdf ]
[13] Nikolas Roman Herbst, Nikolaus Huber, Samuel Kounev, and Erich Amrehn. Self-Adaptive Workload Classification and Forecasting for Proactive Resource Provisioning. In Proceedings of the 4th ACM/SPEC International Conference on Performance Engineering (ICPE 2013), Prague, Czech Republic, April 21-24, 2013, pages 187-198. ACM, New York, NY, USA. April 2013. [ bib | DOI | slides | http | .pdf | Abstract ]
As modern enterprise software systems become increasingly dynamic, workload forecasting techniques are gaining in importance as a foundation for online capacity planning and resource management. Time series analysis covers a broad spectrum of methods to calculate workload forecasts based on history monitoring data. Related work in the field of workload forecasting mostly concentrates on evaluating specific methods and their individual optimisation potential or on predicting Quality-of-Service (QoS) metrics directly. As a basis, we present a survey on established forecasting methods of the time series analysis concerning their benefits and drawbacks and group them according to their computational overheads. In this paper, we propose a novel self-adaptive approach that selects suitable forecasting methods for a given context based on a decision tree and direct feedback cycles together with a corresponding implementation. The user needs to provide only his general forecasting objectives. In several experiments and case studies based on real world workload traces, we show that our implementation of the approach provides continuous and reliable forecast results at run-time. The results of this extensive evaluation show that the relative error of the individual forecast points is significantly reduced compared to statically applied forecasting methods, e.g. in an exemplary scenario on average by 37%. In a case study, between 55% and 75% of the violations of a given service level agreement can be prevented by applying proactive resource provisioning based on the forecast results of our implementation.
[14] Samuel Kounev, Stamatia Rizou, Steffen Zschaler, Spiros Alexakis, Tomas Bures, Jean-Marc Jézéquel, Dimitrios Kourtesis, and Stelios Pantelopoulos. RELATE: A Research Training Network on Engineering and Provisioning of Service-Based Cloud Applications. In International Workshop on Hot Topics in Cloud Services (HotTopiCS 2013), Prague, Czech Republic, April 20-21, 2013. [ bib ]
[15] Aleksandar Milenkoski, Alexandru Iosup, Samuel Kounev, Kai Sachs, Piotr Rygielski, Jason Ding, Walfredo Cirne, and Florian Rosenberg. Cloud Usage Patterns: A Formalism for Description of Cloud Usage Scenarios. Technical Report SPEC-RG-2013-001 v.1.0.1, SPEC Research Group - Cloud Working Group, Standard Performance Evaluation Corporation (SPEC), 7001 Heritage Village Plaza Suite 225, Gainesville, VA 20155, April 2013. [ bib | .pdf | Abstract ]
Cloud computing is becoming an increasingly lucrative branch of the existing information and communication technologies (ICT). Enabling a debate about cloud usage scenarios can help with attracting new customers, sharing best-practices, and designing new cloud services. In contrast to previous approaches, which have attempted mainly to formalize the common service delivery models (i.e., Infrastructure-as-a-Service, Platform-as-a-Service, and Software-as-a-Service), in this work, we propose a formalism for describing common cloud usage scenarios referred to as cloud usage patterns. Our formalism takes a structuralist approach allowing decomposition of a cloud usage scenario into elements corresponding to the common cloud service delivery models. Furthermore, our formalism considers several cloud usage patterns that have recently emerged, such as hybrid services and value chains in which mediators are involved, also referred to as value chains with mediators. We propose a simple yet expressive textual and visual language for our formalism, and we show how it can be used in practice for describing a variety of real-world cloud usage scenarios. The scenarios for which we demonstrate our formalism include resource provisioning of global providers of infrastructure and/or platform resources, online social networking services, user-data processing services, online customer and ticketing services, online asset management and banking applications, CRM (Customer Relationship Management) applications, and online social gaming applications.
[16] Piotr Rygielski, Steffen Zschaler, and Samuel Kounev. A Meta-Model for Performance Modeling of Dynamic Virtualized Network Infrastructures (Work-In-Progress Paper). In Proceedings of the 4th ACM/SPEC International Conference on Performance Engineering (ICPE 2013), Prague, Czech Republic, April 21-24, 2013, pages 327-330. ACM, New York, NY, USA. April 2013, Work-In-Progress Paper. [ bib | http | .pdf ]
[17] Samuel Kounev, Steffen Zschaler, and Kai Sachs, editors. Proceedings of the 2013 International Workshop on Hot Topics in Cloud Services (HotTopiCS 2013). ACM, April 2013. [ bib ]
[18] Christoph Rathfelder, Benjamin Klatt, Kai Sachs, and Samuel Kounev. Modeling event-based communication in component-based software architectures for performance predictions. Software and Systems Modeling, 13(4):1291-1317, March 2013, Springer Verlag. [ bib | DOI | http | .pdf | Abstract ]
Event-based communication is used in different domains including telecommunications, transportation, and business information systems to build scalable distributed systems. Such systems typically have stringent requirements for performance and scalability as they provide business and mission critical services. While the use of event-based communication enables loosely-coupled interactions between components and leads to improved system scalability, it makes it much harder for developers to estimate the system's behavior and performance under load due to the decoupling of components and control flow. In this paper, we present our approach enabling the modeling and performance prediction of event-based systems at the architecture level. Applying a model-to-model transformation, our approach integrates platform-specific performance influences of the underlying middleware while enabling the use of different existing analytical and simulation-based prediction techniques. In summary, the contributions of this paper are: (1) the development of a meta-model for event-based communication at the architecture level, (2) a platform aware model-to-model transformation, and (3) a detailed evaluation of the applicability of our approach based on two representative real-world case studies. The results demonstrate the effectiveness, practicability and accuracy of the proposed modeling and prediction approach.
[19] Piotr Rygielski and Samuel Kounev. Network Virtualization for QoS-Aware Resource Management in Cloud Data Centers: A Survey. PIK - Praxis der Informationsverarbeitung und Kommunikation, 36(1):55-64, February 2013, de Gruyter. [ bib | DOI | http | .pdf ]
[20] Simon Spinner, Samuel Kounev, Xiaoyun Zhu, and Mustafa Uysal. Towards Online Performance Model Extraction in Virtualized Environments (position paper). In Proceedings of the 8th Workshop on Models @ Run.time (MRT 2013), Nelly Bencomo, Robert France, Sebastian Götz, and Bernhard Rumpe, editors, Miami, Florida, USA, 2013, pages 89-95. CEUR-WS. 2013. [ bib | .pdf | Abstract ]
Virtualization increases the complexity and dynamics of modern software architectures, making it a major challenge to manage the end-to-end performance of applications. Architecture-level performance models can help here, as they provide the modeling power and analysis flexibility to predict the performance behavior of applications under varying workloads and configurations. However, the construction of such models is a complex and time-consuming task. In this position paper, we discuss how the existing concept of virtual appliances can be extended to automate the extraction of architecture-level performance models during system operation.
[21] Michael Hauck, Michael Kuperberg, Nikolaus Huber, and Ralf Reussner. Deriving performance-relevant infrastructure properties through model-based experiments with ginpex. Software & Systems Modeling, pages 1-21, 2013, Springer-Verlag. [ bib | DOI | http | Abstract ]
To predict the performance of an application, it is crucial to consider the performance of the underlying infrastructure. Thus, to yield accurate prediction results, performance-relevant properties and behaviour of the infrastructure have to be integrated into performance models. However, capturing these properties is a cumbersome and error-prone task, as it requires carefully engineered measurements and experiments. Existing approaches for creating infrastructure performance models require manual coding of these experiments, or ignore the detailed properties in the models. The contribution of this paper is the Ginpex approach, which introduces goal-oriented and model-based specification and generation of executable performance experiments for automatically detecting and quantifying performance-relevant infrastructure properties. Ginpex provides a metamodel for experiment specification and comes with predefined experiment templates that provide automated experiment execution on the target platform and also automate the evaluation of the experiment results. We evaluate Ginpex using three case studies, where experiments are executed to quantify various infrastructure properties.
[22] Aleksandar Milenkoski, Bryan D. Payne, Nuno Antunes, Marco Vieira, and Samuel Kounev. HInjector: Injecting Hypercall Attacks for Evaluating VMI-based Intrusion Detection Systems (poster paper). In The 2013 Annual Computer Security Applications Conference (ACSAC 2013), New Orleans, Louisiana, USA, 2013. Applied Computer Security Associates (ACSA), Maryland, USA. 2013. [ bib | .pdf ]
[23] Qais Noorshams, Kiana Rostami, Samuel Kounev, Petr Tůma, and Ralf Reussner. I/O Performance Modeling of Virtualized Storage Systems. In Proceedings of the IEEE 21st International Symposium on Modeling, Analysis and Simulation of Computer and Telecommunication Systems, San Francisco, USA, 2013, MASCOTS '13, pages 121-130. Acceptance Rate (Full Paper): 44/163 = 27%. [ bib | DOI | http | .pdf ]
[24] Qais Noorshams, Dominik Bruhn, Samuel Kounev, and Ralf Reussner. Predictive Performance Modeling of Virtualized Storage Systems using Optimized Statistical Regression Techniques. In Proceedings of the ACM/SPEC International Conference on Performance Engineering, Prague, Czech Republic, 2013, ICPE '13, pages 283-294. ACM, New York, NY, USA. 2013. [ bib | DOI | http | .pdf ]
[25] Qais Noorshams, Andreas Rentschler, Samuel Kounev, and Ralf Reussner. A Generic Approach for Architecture-level Performance Modeling and Prediction of Virtualized Storage Systems. In Proceedings of the ACM/SPEC International Conference on Performance Engineering, Prague, Czech Republic, 2013, ICPE '13, pages 339-342. ACM, New York, NY, USA. 2013. [ bib | DOI | http | .pdf ]
[26] Qais Noorshams, Samuel Kounev, and Ralf Reussner. Experimental Evaluation of the Performance-Influencing Factors of Virtualized Storage Systems. In Computer Performance Engineering. 9th European Workshop, EPEW 2012, Munich, Germany, July 30, 2012, and 28th UK Workshop, UKPEW 2012, Edinburgh, UK, July 2, 2012, Revised Selected Papers, Mirco Tribastone and Stephen Gilmore, editors, volume 7587 of Lecture Notes in Computer Science, pages 63-79. Springer Berlin Heidelberg, 2013. [ bib | DOI | http | .pdf ]
[27] Christoph Rathfelder. Modelling Event-Based Interactions in Component-Based Architectures for Quantitative System Evaluation, volume 10 of The Karlsruhe Series on Software Design and Quality. KIT Scientific Publishing, Karlsruhe, Germany, 2013. [ bib | http | http ]
[28] Robert Vaupel, Qais Noorshams, Samuel Kounev, and Ralf Reussner. Using Queuing Models for Large System Migration Scenarios - An Industrial Case Study with IBM System z. In Computer Performance Engineering. 10th European Workshop, EPEW 2013, Venice, Italy, September 16-17, 2013. Proceedings, Maria Simonetta Balsamo, William J. Knottenbelt, and Andrea Marin, editors, volume 8168 of Lecture Notes in Computer Science, pages 263-275. Springer Berlin Heidelberg, 2013. [ bib | DOI | http | .pdf ]
[29] Rouven Krebs, Manuel Loesch, and Samuel Kounev. Performance Isolation Framework for Multi-Tenant Applications. In Proceedings of the 3rd IEEE International Conference on Cloud and Green Computing (CGC 2013), Karlsruhe, Germany, 2013. [ bib ]
[30] Seyed Vahid Mohammadi, Samuel Kounev, Adrián Juan-Verdejo, and Bholanathsingh Surajbali. Soft Reservations: Uncertainty-Aware Resource Reservations in IaaS Environments. In Proceedings of the 3rd International Symposium on Business Modeling and Software Design (BMSD 2013), Noordwijkerhout, The Netherlands, 2013. [ bib | .pdf ]


Publications 2012

[1] Aleksandar Milenkoski and Samuel Kounev. Towards Benchmarking Intrusion Detection Systems for Virtualized Cloud Environments (extended abstract). In Proceedings of the 7th International Conference for Internet Technology and Secured Transactions (ICITST 2012), London, United Kingdom, December 2012, pages 562-563. IEEE, New York, USA. December 2012. [ bib | http | .pdf | Abstract ]
Many recent research works propose novel architectures of intrusion detection systems specifically designed to operate in virtualized environments. However, little attention has been given to the evaluation and benchmarking of such architectures with respect to their performance and dependability. In this paper, we present a research roadmap towards developing a framework for benchmarking intrusion detection systems for cloud environments in a scientifically rigorous and representative manner.
[2] Nikolaus Huber, André van Hoorn, Anne Koziolek, Fabian Brosig, and Samuel Kounev. S/T/A: Meta-Modeling Run-Time Adaptation in Component-Based System Architectures. In Proceedings of the 9th IEEE International Conference on e-Business Engineering (ICEBE 2012), Hangzhou, China, September 9-11, 2012, pages 70-77. IEEE Computer Society, Los Alamitos, CA, USA. September 2012, Acceptance Rate (Full Paper): 19.7% (26/132). [ bib | DOI | http | .pdf | Abstract ]
Modern virtualized system environments usually host diverse applications of different parties and aim at utilizing resources efficiently while ensuring that quality-of-service requirements are continuously satisfied. In such scenarios, complex adaptations to changes in the system environment are still largely performed manually by humans. Over the past decade, autonomic self-adaptation techniques aiming to minimize human intervention have become increasingly popular. However, given that adaptation processes are usually highly system specific, it is a challenge to abstract from system details enabling the reuse of adaptation strategies. In this paper, we propose a novel modeling language (meta-model) providing means to describe system adaptation processes at the system architecture level in a generic, human-understandable and reusable way. We apply our approach to three different realistic contexts (dynamic resource allocation, software architecture optimization, and run-time adaptation planning) showing how the gap between complex manual adaptations and their autonomous execution can be closed by using a holistic model-based approach.
[3] Dennis Westermann, Jens Happe, Rouven Krebs, and Roozbeh Farahbod. Automated inference of goal-oriented performance prediction functions. In Proceedings of the 27th IEEE/ACM International Conference On Automated Software Engineering (ASE 2012), Essen, Germany, September 3-7, 2012. [ bib ]
[4] Samuel Kounev, Kai Sachs, and Piotr Rygielski. SPEC Research Group Newsletter, vol. 1 no. 1, September 2012. Published by Standard Performance Evaluation Corporation (SPEC). [ bib | .html | .pdf ]
[5] Christoph Rathfelder, Stefan Becker, Klaus Krogmann, and Ralf Reussner. Workload-aware system monitoring using performance predictions applied to a large-scale e-mail system. In Proceedings of the Joint 10th Working IEEE/IFIP Conference on Software Architecture (WICSA) & 6th European Conference on Software Architecture (ECSA), Helsinki, Finland, August 2012, pages 31-40. IEEE Computer Society, Washington, DC, USA. August 2012, Acceptance Rate (Full Paper): 19.8%. [ bib | DOI | http | .pdf ]
[6] Fabian Brosig, Nikolaus Huber, and Samuel Kounev. Modeling Parameter and Context Dependencies in Online Architecture-Level Performance Models. In Proceedings of the 15th ACM SIGSOFT International Symposium on Component Based Software Engineering (CBSE 2012), June 26-28, 2012, Bertinoro, Italy, June 2012. Acceptance Rate (Full Paper): 28.5%. [ bib | http | .pdf | Abstract ]
Modern enterprise applications have to satisfy increasingly stringent Quality-of-Service requirements. To ensure that a system meets its performance requirements, the ability to predict its performance under different configurations and workloads is essential. Architecture-level performance models describe performance-relevant aspects of software architectures and execution environments, making it possible to evaluate different usage profiles as well as system deployment and configuration options. However, building performance models manually requires a lot of time and effort. In this paper, we present a novel automated method for the extraction of architecture-level performance models of distributed component-based systems, based on monitoring data collected at run-time. The method is validated in a case study with the industry-standard SPECjEnterprise2010 Enterprise Java benchmark, a representative software system executed in a realistic environment. The obtained performance predictions match the measurements on the real system within an error margin of mostly 10-20 percent.
[7] Nikolaus Huber, Fabian Brosig, and Samuel Kounev. Modeling Dynamic Virtualized Resource Landscapes. In Proceedings of the 8th ACM SIGSOFT International Conference on the Quality of Software Architectures (QoSA 2012), Bertinoro, Italy, June 25-28, 2012, pages 81-90. ACM, New York, NY, USA. June 2012, Acceptance Rate (Full Paper): 25.6%. [ bib | DOI | http | .pdf | Abstract ]
Modern data centers are subject to an increasing demand for flexibility. Increased flexibility and dynamics, however, also result in a higher system complexity. This complexity carries on to run-time resource management for Quality-of-Service (QoS) enforcement, rendering design-time approaches for QoS assurance inadequate. In this paper, we present a set of novel meta-models that can be used to describe the resource landscape, the architecture and resource layers of dynamic virtualized data center infrastructures, as well as their run-time adaptation and resource management aspects. With these meta-models we introduce new modeling concepts to improve model-based run-time QoS assurance. We evaluate our meta-models by modeling a representative virtualized service infrastructure and using these model instances for run-time resource allocation. The results demonstrate the benefits of the new meta-models and show how they can be used to improve model-based system adaptation and run-time resource management in dynamic virtualized data centers.
[8] Rouven Krebs, Christof Momm, and Samuel Kounev. Metrics and Techniques for Quantifying Performance Isolation in Cloud Environments. In Proceedings of the 8th ACM SIGSOFT International Conference on the Quality of Software Architectures (QoSA 2012), Barbora Buhnova and Antonio Vallecillo, editors, Bertinoro, Italy, June 25-28, 2012, pages 91-100. ACM Press, New York, USA. June 2012, Acceptance Rate (Full Paper): 25.6%. [ bib | http | .pdf ]
[9] Simon Spinner, Samuel Kounev, and Philipp Meier. Stochastic Modeling and Analysis using QPME: Queueing Petri Net Modeling Environment v2.0. In Proceedings of the 33rd International Conference on Application and Theory of Petri Nets and Concurrency (Petri Nets 2012), Serge Haddad and Lucia Pomello, editors, Hamburg, Germany, June 27-29, 2012, volume 7347 of Lecture Notes in Computer Science (LNCS), pages 388-397. Springer-Verlag, Berlin, Heidelberg. June 2012. [ bib | http | .pdf | Abstract ]
Queueing Petri nets are a powerful formalism that can be exploited for modeling distributed systems and analyzing their performance and scalability. By combining the modeling power and expressiveness of queueing networks and stochastic Petri nets, queueing Petri nets provide a number of advantages. In this paper, we present our tool QPME (Queueing Petri net Modeling Environment) for modeling and analysis using queueing Petri nets. QPME provides an Eclipse-based editor for building queueing Petri net models and a powerful simulation engine for analyzing these models. The development of the tool started in 2003 and since then the tool has been distributed to more than 120 organizations worldwide.
[10] Michael Faber and Jens Happe. Systematic adoption of genetic programming for deriving software performance curves. In Proceedings of the 3rd ACM/SPEC International Conference on Performance Engineering (ICPE 2012), Boston, USA, April 22-25, 2012, pages 33-44. ACM, New York, NY, USA. April 2012. [ bib | http | .pdf ]
[11] Samuel Kounev, Simon Spinner, and Philipp Meier. Introduction to Queueing Petri Nets: Modeling Formalism, Tool Support and Case Studies (tutorial paper). In Proceedings of the 3rd ACM/SPEC International Conference on Performance Engineering (ICPE 2012), Boston, USA, April 22-25, 2012, pages 9-18. ACM, New York, NY, USA. April 2012. [ bib | slides | http | .pdf ]
[12] Rouven Krebs, Christof Momm, and Samuel Kounev. Architectural Concerns in Multi-Tenant SaaS Applications (short paper). In Proceedings of the 2nd International Conference on Cloud Computing and Services Science (CLOSER 2012), Setubal, Portugal, April 18-21, 2012. SciTePress. April 2012. [ bib | .pdf ]
[13] Kai Sachs, Samuel Kounev, and Alejandro Buchmann. Performance modeling and analysis of message-oriented event-driven systems. Journal of Software and Systems Modeling (SoSyM), pages 1-25, February 2012, Springer-Verlag. [ bib | DOI | http | .pdf ]
[14] Daniel Funke, Fabian Brosig, and Michael Faber. Towards Truthful Resource Reservation in Cloud Computing. In Proceedings of the 6th International ICST Conference on Performance Evaluation Methodologies and Tools (ValueTools 2012), Cargèse, France, 2012. [ bib | .pdf | Abstract ]
Prudent capacity planning to meet their clients' future computational needs is one of the major issues cloud computing providers face today. By offering resource reservations in advance, providers gain insight into the projected demand of their customers and can act accordingly. However, customers need to be given an incentive, e.g., granted discounts, to commit early to a provider and to honestly, i.e., truthfully, reserve their predicted future resource requirements. Customers may reserve capacity deviating from their truly predicted demand in order to exploit the mechanism for their own benefit, thereby causing futile costs for the provider. In this paper, we prove, using a game-theoretic approach, that truthful reservation is the best, i.e., dominant, strategy for customers if they are capable of making precise forecasts of their demands, and that deviations from truth-telling can be profitable for customers if their demand forecasts are uncertain.
[15] Katja Gilly, Fabian Brosig, Ramon Nou, Samuel Kounev, and Carlos Juiz. Online prediction: Four case studies. In Resilience Assessment and Evaluation of Computing Systems, K. Wolter, A. Avritzer, M. Vieira, and A. van Moorsel, editors, XVIII. Springer-Verlag, Berlin, Heidelberg, 2012. ISBN: 978-3-642-29031-2. [ bib | http | .pdf | Abstract ]
Current computing systems are becoming increasingly complex in nature and exhibit large variations in workloads. These changing environments create challenges for the design of systems that can adapt themselves while maintaining desired Quality of Service (QoS), security, dependability, availability and other non-functional requirements. The next generation of resilient systems will be highly distributed, component-based and service-oriented. They will need to operate in unattended mode and possibly in hostile environments, will be composed of a large number of interchangeable components discoverable at run-time, and will have to run on a multitude of unknown and heterogeneous hardware and network platforms. These computer systems will adapt themselves to cope with changes in the operating conditions and to meet service-level agreements with a minimum of resources. Changes in operating conditions include hardware and software failures, load variations and variations in user interaction with the system, including security attacks and overload situations. Such self-adaptation of the next generation of resilient systems can be achieved by first predicting online how these situations will evolve, based on observation of the current environment. This chapter focuses on the use of online prediction methods, techniques and tools for resilient systems. Thus, we survey online QoS adaptive models in several environments, such as grid environments, service-oriented architectures and ambient intelligence, using different approaches based on queueing networks, model checking and ontology engineering, among others.
[16] Nikolas Roman Herbst. Workload Classification and Forecasting. Diploma Thesis, Karlsruhe Institute of Technology (KIT), Am Fasanengarten 5, 76131 Karlsruhe, Germany, 2012. Forschungszentrum Informatik (FZI) Prize "Best Diploma Thesis". [ bib | .pdf | Abstract ]
Virtualization technologies enable dynamic allocation of computing resources to execution environments at run-time. To exploit the optimisation potential that comes with these degrees of freedom, forecasts of the arriving work's intensity are valuable information for continuously ensuring a defined quality of service (QoS) and, at the same time, improving the efficiency of resource utilisation. Time series analysis offers a broad spectrum of methods for the calculation of forecasts based on periodically monitored values. Related work in the field of proactive resource provisioning mostly concentrates on single methods of time series analysis and their individual optimisation potential; this way, usable forecast results are achieved only in certain situations. In this thesis, established methods of time series analysis are surveyed and grouped concerning their strengths and weaknesses. A dynamic approach is presented that selects the method suitable for a given situation based on a decision tree and direct feedback cycles capturing the forecast accuracy. The user needs to provide only their general forecast objectives. An implementation of the introduced theoretical approach is presented that continuously provides forecasts of the arriving work's intensity in configurable intervals and with controllable computational overhead during run-time. Based on real-world intensity traces, a number of different experiments and a case study are conducted. The results show that, by use of the implementation, the relative error of the forecast points in relation to the arriving observations is reduced by 63% on average compared to the results of a statically selected, sophisticated method. In a case study, between 52% and 70% of the violations of a given service level agreement are prevented by applying proactive resource provisioning based on the forecast results of the introduced implementation.
[17] Nikolaus Huber, Fabian Brosig, N. Dingle, K. Joshi, and Samuel Kounev. Providing Dependability and Performance in the Cloud: Case Studies. In Resilience Assessment and Evaluation of Computing Systems, K. Wolter, A. Avritzer, M. Vieira, and A. van Moorsel, editors, XVIII. Springer-Verlag, Berlin, Heidelberg, 2012. ISBN: 978-3-642-29031-2. [ bib | http | .pdf ]
[18] Nikolaus Huber, Marcel von Quast, Fabian Brosig, Michael Hauck, and Samuel Kounev. A Method for Experimental Analysis and Modeling of Virtualization Performance Overhead. In Cloud Computing and Services Science, Ivan Ivanov, Marten van Sinderen, and Boris Shishkov, editors, Service Science: Research and Innovations in the Service Economy, pages 353-370. Springer, New York, 2012. [ bib | DOI | http | .pdf ]
[19] Rüdiger Kapitza, Johannes Behl, Christian Cachin, Tobias Distler, Simon Kuhnle, Seyed Vahid Mohammadi, Wolfgang Schröder-Preikschat, and Klaus Stengel. CheapBFT: Resource-efficient Byzantine Fault Tolerance. In Proceedings of the 7th ACM European Conference on Computer Systems, Bern, Switzerland, 2012, EuroSys '12, pages 295-308. ACM, New York, NY, USA. 2012. [ bib | DOI | http ]
[20] Samuel Kounev, Nikolaus Huber, Simon Spinner, and Fabian Brosig. Model-based techniques for performance engineering of business information systems. In Business Modeling and Software Design, Boris Shishkov, editor, volume 109 of Lecture Notes in Business Information Processing (LNBIP), pages 19-37. Springer-Verlag, Berlin, Heidelberg, 2012. [ bib | http | .pdf | Abstract ]
With the increasing adoption of virtualization and the transition towards Cloud Computing platforms, modern business information systems are becoming increasingly complex and dynamic. This raises the challenge of guaranteeing system performance and scalability while at the same time ensuring efficient resource usage. In this paper, we present a historical perspective on the evolution of model-based performance engineering techniques for business information systems focusing on the major developments over the past several decades that have shaped the field. We survey the state-of-the-art on performance modeling and management approaches discussing the ongoing efforts in the community to increasingly bridge the gap between high-level business services and low level performance models. Finally, we wrap up with an outlook on the emergence of self-aware systems engineering as a new research area at the intersection of several computer science disciplines.
[21] Samuel Kounev, Philipp Reinecke, Fabian Brosig, Jeremy T. Bradley, Kaustubh Joshi, Vlastimil Babka, Anton Stefanek, and Stephen Gilmore. Providing dependability and resilience in the cloud: Challenges and opportunities. In Resilience Assessment and Evaluation of Computing Systems, K. Wolter, A. Avritzer, M. Vieira, and A. van Moorsel, editors, XVIII. Springer-Verlag, Berlin, Heidelberg, 2012. ISBN: 978-3-642-29031-2. [ bib | http | .pdf | Abstract ]
Cloud Computing is a novel paradigm for providing data center resources as on demand services in a pay-as-you-go manner. It promises significant cost savings by making it possible to consolidate workloads and share infrastructure resources among multiple applications resulting in higher cost- and energy-efficiency. However, these benefits come at the cost of increased system complexity and dynamicity posing new challenges in providing service dependability and resilience for applications running in a Cloud environment. At the same time, the virtualization of physical resources, inherent in Cloud Computing, provides new opportunities for novel dependability and quality-of-service management techniques that can potentially improve system resilience. In this chapter, we first discuss in detail the challenges and opportunities introduced by the Cloud Computing paradigm. We then provide a review of the state-of-the-art on dependability and resilience management in Cloud environments, and conclude with an overview of emerging research directions.
[22] Seyed Vahid Mohammadi, Markus Bauer, and Adrián Juan-Verdejo. Dynamic Cloud Reconfiguration to Meet QoS Requirements. In Proceedings of the 6th EuroSys Doctoral Workshop (EuroDW 2012), Bern, Switzerland, 2012. [ bib ]
[23] Wolfgang Theilmann, Sergio Garcia Gomez, John Kennedy, Davide Lorenzoli, Christoph Rathfelder, Thomas Roeblitz, and Gabriele Zacco. A Framework for Multi-level SLA Management. In Handbook of Research on Service-Oriented Systems and Non-Functional Properties: Future Directions, Stephan Reiff-Marganiec and Marcel Tilly, editors, pages 470-490. IGI Global, Hershey, PA, USA, 2012. [ bib | http ]
[24] Marco Vieira, Henrique Madeira, Kai Sachs, and Samuel Kounev. Resilience Benchmarking. In Resilience Assessment and Evaluation of Computing Systems, K. Wolter, A. Avritzer, M. Vieira, and A. van Moorsel, editors, XVIII. Springer-Verlag, Berlin, Heidelberg, 2012. ISBN: 978-3-642-29031-2. [ bib | http | .pdf ]


Publications 2011

[1] Benjamin Klatt, Franz Brosch, Zoya Durdik, and Christoph Rathfelder. Quality Prediction in Service Composition Frameworks. In 5th Workshop on Non-Functional Properties and SLA Management in Service-Oriented Computing (NFPSLAM-SOC 2011), Paphos, Cyprus, December 5-8, 2011. [ bib | .pdf | Abstract ]
With the introduction of services, software systems have become more flexible, as new services can easily be composed from existing ones. Service composition frameworks offer corresponding functionality and hide the complexity of the underlying technologies from their users. However, possibilities for anticipating the quality properties of composed services before their actual operation have so far been limited. While existing approaches for model-based software quality prediction can be used by service composers for determining realizable Quality of Service (QoS) levels, the integration of such techniques into composition frameworks is still missing. As a result, high effort and expert knowledge are required to build the system models required for prediction. In this paper, we present a novel service composition process that includes QoS prediction for composed services as an integral part. Furthermore, we describe how composition frameworks can be extended to support this process. With our approach, systematic consideration of service quality during the composition process is naturally achieved, without the need for detailed knowledge about the underlying prediction models. To evaluate our work and validate its applicability in different domains, we have integrated QoS prediction support according to our process in two composition frameworks - a large-scale SLA management framework and a service mashup platform.
[2] Christoph Rathfelder, Benjamin Klatt, Franz Brosch, and Samuel Kounev. Performance Modeling for Quality of Service Prediction in Service-Oriented Systems. IGI Global, Hershey, PA, USA, December 2011. [ bib | DOI | http | Abstract ]
With the introduction of services, systems become more flexible as new services can easily be composed out of existing services. Services are increasingly used in mission-critical systems and applications and therefore considering Quality of Service (QoS) properties is an essential part of the service selection. Quality prediction techniques support the service provider in determining possible QoS levels that can be guaranteed to a customer or in deriving the operation costs induced by a certain QoS level. In this chapter, we present an overview on our work on modeling service-oriented systems for performance prediction using the Palladio Component Model. The prediction builds upon a model of a service-based system, and evaluates this model in order to determine the expected service quality. The presented techniques allow for early quality prediction, without the need for the system being already deployed and operating. We present the integration of our prediction approach into an SLA management framework. The emerging trend to combine event-based communication and Service-Oriented Architecture (SOA) into Event-based SOA (ESOA) induces new challenges to our approach, which are topic of a special subsection.
[3] Fabian Brosig, Nikolaus Huber, and Samuel Kounev. Automated Extraction of Architecture-Level Performance Models of Distributed Component-Based Systems. In 26th IEEE/ACM International Conference On Automated Software Engineering (ASE 2011), November 2011. Oread, Lawrence, Kansas. Acceptance Rate (Full Paper): 14.7% (37/252). [ bib | .pdf | Abstract ]
Modern service-oriented enterprise systems have increasingly complex and dynamic loosely-coupled architectures that often exhibit poor performance and resource efficiency and have high operating costs. This is due to the inability to predict at run-time the effect of dynamic changes in the system environment and adapt the system configuration accordingly. Architecture-level performance models provide a powerful tool for performance prediction, however, current approaches to modeling the execution context of software components are not suitable for use at run-time. In this paper, we analyze the typical online performance prediction scenarios and propose a novel performance meta-model for expressing and resolving parameter and context dependencies, specifically designed for use in online scenarios. We motivate and validate our approach in the context of a realistic and representative online performance prediction scenario based on the SPECjEnterprise2010 standard benchmark.
[4] Christoph Rathfelder, Samuel Kounev, and David Evans. Capacity Planning for Event-based Systems using Automated Performance Predictions. In 26th IEEE/ACM International Conference On Automated Software Engineering (ASE 2011), Oread, Lawrence, Kansas, November 6-12, 2011, pages 352-361. IEEE. November 2011, Acceptance Rate (Full Paper): 14.7% (37/252). [ bib | .pdf | Abstract ]
Event-based communication is used in different domains including telecommunications, transportation, and business information systems to build scalable distributed systems. The loose coupling of components in such systems makes it easy to vary the deployment. At the same time, the complexity to estimate the behavior and performance of the whole system is increased, which complicates capacity planning. In this paper, we present an automated performance prediction method supporting capacity planning for event-based systems. The performance prediction is based on an extended version of the Palladio Component Model - a performance meta-model for component-based systems. We apply this method on a real-world case study of a traffic monitoring system. In addition to the application of our performance prediction techniques for capacity planning, we evaluate the prediction results against measurements in the context of the case study. The results demonstrate the practicality and effectiveness of the proposed approach.
[5] Dennis Westermann, Rouven Krebs, and Jens Happe. Efficient Experiment Selection in Automated Software Performance Evaluations. In Proceedings of the Computer Performance Engineering - 8th European Performance Engineering Workshop (EPEW 2011), Borrowdale, UK, October 12-13, 2011, pages 325-339. Springer. October 2011. [ bib | .pdf ]
[6] Samuel Kounev. Performance Engineering of Business Information Systems - Filling the Gap between High-level Business Services and Low-level Performance Models. In International Symposium on Business Modeling and Software Design (BMSD 2011), Sofia, Bulgaria, July 27-28, 2011, July 2011. [ bib | .pdf ]
[7] Simon Spinner. Evaluating Approaches to Resource Demand Estimation. Master's thesis, Karlsruhe Institute of Technology (KIT), Am Fasanengarten 5, 76131 Karlsruhe, Germany, July 2011. Best Graduate Award from the Faculty of Informatics. [ bib | .pdf ]
[8] Michael Hauck, Michael Kuperberg, Nikolaus Huber, and Ralf Reussner. Ginpex: Deriving Performance-relevant Infrastructure Properties Through Goal-oriented Experiments. In Proceedings of the 7th ACM SIGSOFT International Conference on the Quality of Software Architectures (QoSA 2011), June 20-24, 2011, pages 53-62. ACM, New York, NY, USA. June 2011. [ bib | DOI | www: | .pdf ]
[9] Benjamin Klatt, Christoph Rathfelder, and Samuel Kounev. Integration of event-based communication in the Palladio software quality prediction framework. In Proceedings of the joint ACM SIGSOFT conference - QoSA and ACM SIGSOFT symposium - ISARCS on Quality of software architectures - QoSA and architecting critical systems - ISARCS (QoSA-ISARCS 2011), Boulder, Colorado, USA, June 20-24, 2011, pages 43-52. SIGSOFT, ACM, New York, NY, USA. June 2011. [ bib | DOI | http | .pdf | Abstract ]
Today, software engineering is challenged to handle more and more large-scale distributed systems with guaranteed quality-of-service. Component-based architectures have been established to build such systems in a more structured and manageable way. Modern architectures often utilize event-based communication which enables loosely-coupled interactions between components and leads to improved system scalability. However, the loose coupling of components makes it challenging to model such architectures in order to predict their quality properties, e.g., performance and reliability, at system design time. In this paper, we present an extension of the Palladio Component Model (PCM) and the Palladio software quality prediction framework, enabling the modeling of event-based communication in component-based architectures. The contributions include: i) a meta-model extension supporting events as first class entities, ii) a model-to-model transformation from the extended to the original PCM, iii) an integration of the transformation into the Palladio tool chain allowing to use existing model solution techniques, and iv) a detailed evaluation of the reduction of the modeling effort enabled by the transformation in the context of a real-world case study.
[10] Samuel Kounev, Fabian Brosig, and Nikolaus Huber. Self-Aware QoS Management in Virtualized Infrastructures (Poster Paper). In 8th International Conference on Autonomic Computing (ICAC 2011), Karlsruhe, Germany, June 14-18, 2011. [ bib | .pdf | Abstract ]
We present an overview of our work-in-progress and long-term research agenda aiming to develop a novel methodology for engineering of self-aware software systems. The latter will have built-in architecture-level QoS models enhanced to capture dynamic aspects of the system environment and maintained automatically during operation. The models will be exploited at run-time to adapt the system to changes in the environment ensuring that resources are utilized efficiently and QoS requirements are satisfied.
[11] Christoph Rathfelder and Benjamin Klatt. Palladio workbench: A quality-prediction tool for component-based architectures. In Proceedings of the 2011 Ninth Working IEEE/IFIP Conference on Software Architecture (WICSA 2011), Boulder, Colorado, USA, June 20-24, 2011, pages 347-350. IEEE Computer Society, Washington, DC, USA. June 2011. [ bib | DOI | http | .pdf | Abstract ]
Today, software engineering is challenged to handle more and more large-scale distributed systems with a guaranteed level of service quality. Component-based architectures have been established to build more structured and manageable software systems. However, due to time and cost constraints, it is not feasible to use a trial and error approach to ensure that an architecture meets the quality of service (QoS) requirements. In this tool demo, we present the Palladio Workbench that permits the modeling of component-based software architectures and the prediction of its quality characteristics (e.g., response time and utilization). In addition to a general tool overview, we will give some insights about a new feature to analyze the impact of event-driven communication that was added in the latest release of the Palladio Component Model (PCM).
[12] Nikolaus Huber, Fabian Brosig, and Samuel Kounev. Model-based Self-Adaptive Resource Allocation in Virtualized Environments. In 6th International Symposium on Software Engineering for Adaptive and Self-Managing Systems (SEAMS 2011), Waikiki, Honolulu, HI, USA, May 23-24, 2011, pages 90-99. ACM, New York, NY, USA. May 2011, Acceptance Rate (Full Paper): 27% (21/76). [ bib | DOI | http | .pdf | Abstract ]
The adoption of virtualization and Cloud Computing technologies promises a number of benefits such as increased flexibility, better energy efficiency and lower operating costs for IT systems. However, highly variable workloads make it challenging to provide quality-of-service guarantees while at the same time ensuring efficient resource utilization. To avoid violations of service-level agreements (SLAs) or inefficient resource usage, resource allocations have to be adapted continuously during operation to reflect changes in application workloads. In this paper, we present a novel approach to self-adaptive resource allocation in virtualized environments based on online architecture-level performance models. We present a detailed case study of a representative enterprise application, the new SPECjEnterprise2010 benchmark, deployed in a virtualized cluster environment. The case study serves as a proof-of-concept demonstrating the effectiveness and practical applicability of our approach.
[13] Nikolaus Huber, Marcel von Quast, Michael Hauck, and Samuel Kounev. Evaluating and Modeling Virtualization Performance Overhead for Cloud Environments. In Proceedings of the 1st International Conference on Cloud Computing and Services Science (CLOSER 2011), Noordwijkerhout, The Netherlands, May 7-9, 2011, pages 563-573. SciTePress. May 2011, Acceptance Rate: 18/164 = 10.9%, Best Paper Award. [ bib | http | .pdf | Abstract ]
Due to trends like Cloud Computing and Green IT, virtualization technologies are gaining increasing importance. They promise energy and cost savings by sharing physical resources, thus making resource usage more efficient. However, resource sharing and other factors have direct effects on system performance, which are not yet well-understood. Hence, performance prediction and performance management of services deployed in virtualized environments like public and private Clouds is a challenging task. Because of the large variety of virtualization solutions, a generic approach to predict the performance overhead of services running on virtualization platforms is highly desirable. In this paper, we present experimental results on two popular state-of-the-art virtualization platforms, Citrix XenServer 5.5 and VMware ESX 4.0, as representatives of the two major hypervisor architectures. Based on these results, we propose a basic, generic performance prediction model for the two different types of hypervisor architectures. The target is to predict the performance overhead for executing services on virtualized platforms.
[14] Samuel Kounev and Simon Spinner. QPME 2.0 User's Guide. Karlsruhe Institute of Technology, Am Fasanengarten 5, 76131 Karlsruhe, Germany, May 2011. [ bib | http | .pdf ]
[15] Michael Faber. Software Performance Analysis using Machine Learning Techniques. Master's thesis, Karlsruhe Institute of Technology (KIT), Karlsruhe, Germany, March 2011. [ bib ]
[16] Samuel Kounev, Konstantin Bender, Fabian Brosig, Nikolaus Huber, and Russell Okamoto. Automated Simulation-Based Capacity Planning for Enterprise Data Fabrics. In 4th International ICST Conference on Simulation Tools and Techniques, Barcelona, Spain, March 21-25, 2011, pages 27-36. ICST, Brussels, Belgium, Belgium. March 2011, Acceptance Rate (Full Paper): 29.8% (23/77), ICST Best Paper Award. [ bib | slides | .pdf | Abstract ]
Enterprise data fabrics are gaining increasing attention in many industry domains including financial services, telecommunications, transportation and health care. Providing a distributed, operational data platform sitting between application infrastructures and back-end data sources, enterprise data fabrics are designed for high performance and scalability. However, given the dynamics of modern applications, system sizing and capacity planning need to be done continuously during operation to ensure adequate quality-of-service and efficient resource utilization. While most products are shipped with performance monitoring and analysis tools, such tools are typically focused on low-level profiling and they lack support for performance prediction and capacity planning. In this paper, we present a novel case study of a representative enterprise data fabric, the GemFire EDF, presenting a simulation-based tool that we have developed for automated performance prediction and capacity planning. The tool, called Jewel, automates resource demand estimation, performance model generation, performance model analysis and results processing. We present an experimental evaluation of the tool demonstrating its effectiveness and practical applicability.
[17] Samuel Kounev, Vittorio Cortellessa, Raffaela Mirandola, and David J. Lilja, editors. ICPE'11 - 2nd Joint ACM/SPEC International Conference on Performance Engineering, Karlsruhe, Germany, March 14-16, 2011, New York, NY, USA, March 2011. ACM. [ bib ]
[18] Fabian Brosig. Online performance prediction with architecture-level performance models. In Software Engineering (Workshops) - Doctoral Symposium, February 21-25, 2011, Ralf Reussner, Alexander Pretschner, and Stefan Jähnichen, editors, February 2011, volume 184 of Lecture Notes in Informatics (LNI), pages 279-284. GI, Bonn, Germany. February 2011. [ bib | .pdf | Abstract ]
Today's enterprise systems based on increasingly complex software architectures often exhibit poor performance and resource efficiency thus having high operating costs. This is due to the inability to predict at run-time the effect of changes in the system environment and adapt the system accordingly. We propose a new performance modeling approach that allows the prediction of performance and system resource utilization online during system operation. We use architecture-level performance models that capture the performance-relevant information of the software architecture, deployment, execution environment and workload. The models will be automatically maintained during operation. To derive performance predictions, we propose a tailorable model solving approach to provide flexibility in view of prediction accuracy and analysis overhead.
[19] Christof Momm and Rouven Krebs. A Qualitative Discussion of Different Approaches for Implementing Multi-Tenant SaaS Offerings (short paper). In Proceedings of the Software Engineering 2011 - Workshopband (ESoSyM-2011), Ralf Reussner, Alexander Pretschner, and Stefan Jähnichen, editors, Karlsruhe, Germany, February 21, 2011, pages 139-150. Fachgruppe OOSE der Gesellschaft für Informatik und ihrer Arbeitskreise, Bonner Köllen Verlag, Bonn-Buschdorf, Germany. February 2011. [ bib | .pdf ]
[20] Nikolas Roman Herbst. Quantifying the Impact of Configuration Space for Elasticity Benchmarking. Study Thesis, Karlsruhe Institute of Technology (KIT), Am Fasanengarten 5, 76131 Karlsruhe, Germany, 2011. [ bib | .pdf | Abstract ]
Elasticity is the ability of a software system to dynamically adapt the amount of the resources it provides to clients as their workloads increase or decrease. In the context of cloud computing, automated resizing of a virtual machine's resources can be considered as a key step towards optimisation of a system's cost and energy efficiency. Existing work on cloud computing is limited to the technical view of implementing elastic systems, and definitions of scalability have not been extended to cover elasticity. This study thesis presents a detailed discussion of elasticity, proposes metrics as well as measurement techniques, and outlines next steps for enabling comparisons between cloud computing offerings on the basis of elasticity. I discuss results of our work on measuring elasticity of thread pools provided by the Java virtual machine, as well as an experiment setup for elastic CPU time slice resizing in a virtualized environment. An experiment setup is presented as future work for dynamically adding and removing z/VM Linux virtual machine instances to a performance relevant group of virtualized servers.
[21] Samuel Kounev. Engineering of Self-Aware IT Systems and Services: State-of-the-Art and Research Challenges. In Proceedings of the 8th European Performance Engineering Workshop (EPEW'11), Borrowdale, The English Lake District, October 12-13, 2011. (Keynote Talk). [ bib | .pdf ]
[22] Samuel Kounev. Self-Aware Software and Systems Engineering: A Vision and Research Roadmap. In GI Softwaretechnik-Trends, 31(4), November 2011. ISSN 0720-8928. Karlsruhe, Germany. [ bib | .html | .pdf ]
[23] Anne Koziolek, Qais Noorshams, and Ralf Reussner. Focussing multi-objective software architecture optimization using quality of service bounds. In Models in Software Engineering, Workshops and Symposia at MODELS 2010, Oslo, Norway, October 3-8, 2010, Reports and Revised Selected Papers, J. Dingel and A. Solberg, editors, 2011, volume 6627 of Lecture Notes in Computer Science, pages 384-399. Springer-Verlag Berlin Heidelberg. 2011. [ bib | DOI | http | .pdf | Abstract ]
Quantitative prediction of non-functional properties, such as performance, reliability, and costs, of software architectures supports systematic software engineering. Even though there usually is a rough idea on bounds for quality of service, the exact required values may be unclear and subject to trade-offs. Designing architectures that exhibit such good trade-off between multiple quality attributes is hard. Even with a given functional design, many degrees of freedom in the software architecture (e.g. component deployment or server configuration) span a large design space. Automated approaches search the design space with multi-objective metaheuristics such as evolutionary algorithms. However, as quality prediction for a single architecture is computationally expensive, these approaches are time consuming. In this work, we enhance an automated improvement approach to take into account bounds for quality of service in order to focus the search on interesting regions of the objective space, while still allowing trade-offs after the search. We compare two different constraint handling techniques to consider the bounds. To validate our approach, we applied both techniques to an architecture model of a component-based business information system. We compared both techniques to an unbounded search in 4 scenarios. Every scenario was examined with 10 optimization runs, each investigating around 1600 architectural candidates. The results indicate that the integration of quality of service bounds during the optimization process can improve the quality of the solutions found, however, the effect depends on the scenario, i.e. the problem and the quality requirements. The best results were achieved for costs requirements: The approach was able to decrease the time needed to find good solutions in the interesting regions of the objective space by 25% on average.
[24] Michael Kuperberg, Nikolas Roman Herbst, Joakim Gunnarson von Kistowski, and Ralf Reussner. Defining and Quantifying Elasticity of Resources in Cloud Computing and Scalable Platforms. Technical report, Karlsruhe Institute of Technology (KIT), Am Fasanengarten 5, 76131 Karlsruhe, Germany, 2011. [ bib | http | .pdf | Abstract ]
Elasticity is the ability of a software system to dynamically scale the amount of the resources it provides to clients as their workloads increase or decrease. Elasticity is praised as a key advantage of cloud computing, where computing resources are dynamically added and released. However, there exists no concise or formal definition of elasticity, and thus no approaches to quantify it have been developed so far. Existing work on cloud computing is limited to the technical view of implementing elastic systems, and definitions of scalability have not been extended to cover elasticity. In this report, we present a detailed discussion of elasticity, propose techniques for quantifying and measuring it, and outline next steps to be taken for enabling comparisons between cloud computing offerings on the basis of elasticity. We also present preliminary work on measuring elasticity of resource pools provided by the Java Virtual Machine.
[25] Philipp Meier, Samuel Kounev, and Heiko Koziolek. Automated Transformation of Component-based Software Architecture Models to Queueing Petri Nets. In 19th IEEE/ACM International Symposium on Modeling, Analysis and Simulation of Computer and Telecommunication Systems (MASCOTS 2011), Singapore, July 25-27, 2011. Acceptance Rate (Full Paper): 41/157 = 26%. [ bib | .pdf ]
[26] Christoph Rathfelder, Benjamin Klatt, and Giovanni Falcone. The Open Reference Case: A Reference Use Case for the SLA@SOI Framework. Springer, New York, 2011. [ bib | http ]
[27] Nigel Thomas, Jeremy Bradley, William Knottenbelt, Samuel Kounev, Nikolaus Huber, and Fabian Brosig. Preface. Electronic Notes in Theoretical Computer Science, 275:1 - 3, 2011, Elsevier Science Publishers B. V., Amsterdam, The Netherlands. [ bib | DOI ]


Publications 2010

[1] Marco Comuzzi, Constantinos Kotsokalis, Christoph Rathfelder, Wolfgang Theilmann, Ulrich Winkler, and Gabriele Zacco. A framework for multi-level SLA management. In Service-Oriented Computing. ICSOC/ServiceWave 2009 Workshops, Asit Dan, Frédéric Gittler, and Farouk Toumani, editors, Stockholm, Sweden, November 23-27, 2010, volume 6275 of Lecture Notes in Computer Science, pages 187-196. Springer, Berlin, Heidelberg. November 2010. [ bib | DOI | http | .pdf | Abstract ]
Service-Oriented Architectures (SOA) represent an architectural shift for building business applications based on loosely-coupled services. In a multi-layered SOA environment the exact conditions under which services are to be delivered can be formally specified by Service Level Agreements (SLAs). However, typical SLAs are just specified at the customer-level and do not allow service providers to manage their IT stack accordingly as they have no insight on how customer-level SLAs translate to metrics or parameters at the various layers of the IT stack. In this paper we present a technical architecture for a multi-level SLA management framework. We discuss the fundamental components and interfaces in this architecture and explain the developed integrated framework. Furthermore, we show results from a qualitative evaluation of the framework in the context of an open reference case.
[2] Nikolaus Huber, Marcel von Quast, Fabian Brosig, and Samuel Kounev. Analysis of the Performance-Influencing Factors of Virtualization Platforms. In The 12th International Symposium on Distributed Objects, Middleware, and Applications (DOA 2010), Crete, Greece, October 26, 2010. Springer Verlag, Crete, Greece. October 2010, Acceptance Rate (Full Paper): 33%. [ bib | .pdf | Abstract ]
Nowadays, virtualization solutions are gaining increasing importance. By enabling the sharing of physical resources, thus making resource usage more efficient, they promise energy and cost savings. Additionally, virtualization is the key enabling technology for Cloud Computing and server consolidation. However, the effects of sharing resources on system performance are not yet well-understood. This makes performance prediction and performance management of services deployed in such dynamic systems very challenging. Because of the large variety of virtualization solutions, a generic approach to predict the performance influences of virtualization platforms is highly desirable. In this paper, we present a hierarchical model capturing the major performance-relevant factors of virtualization platforms. We then propose a general methodology to quantify the influence of the identified factors based on an empirical approach using benchmarks. Finally, we present a case study of Citrix XenServer 5.5, a state-of-the-art virtualization platform.
[3] Rouven Krebs. Combination of measurement and model based approaches for performance prediction in service oriented systems. Master's thesis, University of Applied Sciences Karlsruhe, Moltkestr. 30, 76133 Karlsruhe, Germany, October 2010. [ bib ]
[4] Christoph Rathfelder, David Evans, and Samuel Kounev. Predictive Modelling of Peer-to-Peer Event-driven Communication in Component-based Systems. In Proceedings of the 7th European Performance Engineering Workshop (EPEW 2010), Alessandro Aldini, Marco Bernardo, Luciano Bononi, and Vittorio Cortellessa, editors, Bertinoro, Italy, September 23-24, 2010, volume 6342 of Lecture Notes in Computer Science (LNCS), pages 219-235. Springer-Verlag, Berlin, Heidelberg. September 2010. [ bib | .pdf | Abstract ]
The event-driven communication paradigm is used increasingly often to build loosely-coupled distributed systems in many industry domains including telecommunications, transportation, and supply chain management. However, the loose coupling of components in such systems makes it hard for developers to estimate their behaviour and performance under load. Most general purpose performance meta-models for component-based systems provide limited support for modelling event-driven communication. In this paper, we present a case study of a real-life road traffic monitoring system that shows how event-driven communication can be modelled for performance prediction and capacity planning. Our approach is based on the Palladio Component Model (PCM) which we have extended to support event-driven communication. We evaluate the accuracy of our modelling approach in a number of different workload and configuration scenarios. The results demonstrate the practicality and effectiveness of the proposed approach.
[5] Jens Happe, Steffen Becker, Christoph Rathfelder, Holger Friedrich, and Ralf H. Reussner. Parametric Performance Completions for Model-Driven Performance Prediction. Performance Evaluation (PE), 67(8):694-716, August 2010, Elsevier. [ bib | DOI | http | .pdf | Abstract ]
Performance prediction methods can help software architects to identify potential performance problems, such as bottlenecks, in their software systems during the design phase. In such early stages of the software life-cycle, only a little information is available about the system's implementation and execution environment. However, these details are crucial for accurate performance predictions. Performance completions close the gap between available high-level models and required low-level details. Using model-driven technologies, transformations can include details of the implementation and execution environment into abstract performance models. However, existing approaches do not consider the relation of actual implementations and performance models used for prediction. Furthermore, they neglect the broad variety of possible implementations and middleware platforms, possible configurations, and possible usage scenarios. In this paper, we (i) establish a formal relation between generated performance models and generated code, (ii) introduce a design and application process for parametric performance completions, and (iii) develop a parametric performance completion for Message-oriented Middleware according to our method. Parametric performance completions are independent of a specific platform, reflect performance-relevant software configurations, and capture the influence of different usage scenarios. To evaluate the prediction accuracy of the completion for Message-oriented Middleware, we conducted a real-world case study with the SPECjms2007 Benchmark [http://www.spec.org/jms2007/]. The observed deviation of measurements and predictions was below 10% to 15%.
[6] Samuel Kounev. Engineering of Next Generation Self-Aware Software Systems: A Research Roadmap. In Emerging Research Directions in Computer Science. Contributions from the Young Informatics Faculty in Karlsruhe. KIT Scientific Publishing, Karlsruhe, Germany, July 2010. [ bib | http | .pdf ]
[7] Samuel Kounev, Fabian Brosig, Nikolaus Huber, and Ralf Reussner. Towards self-aware performance and resource management in modern service-oriented systems. In Proceedings of the 7th IEEE International Conference on Services Computing (SCC 2010), July 5-10, Miami, Florida, USA, Miami, Florida, USA, July 5-10, 2010. IEEE Computer Society. July 2010. [ bib | .pdf | Abstract ]
Modern service-oriented systems have increasingly complex loosely-coupled architectures that often exhibit poor performance and resource efficiency and have high operating costs. This is due to the inability to predict at run-time the effect of dynamic changes in the system environment (e.g., varying service workloads) and adapt the system configuration accordingly. In this paper, we describe a long-term vision and approach for designing systems with built-in self-aware performance and resource management capabilities. We advocate the use of architecture-level performance models extracted dynamically from the evolving system configuration and maintained automatically during operation. The models will be exploited at run-time to adapt the system to changes in the environment ensuring that resources are utilized efficiently and performance requirements are continuously satisfied.
[8] Christoph Rathfelder, Benjamin Klatt, Samuel Kounev, and David Evans. Towards middleware-aware integration of event-based communication into the Palladio component model. In Proceedings of the Fourth ACM International Conference on Distributed Event-Based Systems (DEBS 2010), Cambridge, United Kingdom, July 12-15, 2010, pages 97-98. ACM, New York, NY, USA. July 2010. [ bib | DOI | http | .pdf | Abstract ]
The event-based communication paradigm is becoming increasingly ubiquitous as an enabling technology for building loosely-coupled distributed systems. However, the loose coupling of components in such systems makes it hard for developers to predict their performance under load. Most general purpose performance meta-models for component-based systems provide limited support for modelling event-based communication and neglect middleware-specific influence factors. In this poster, we present an extension of our approach to modelling event-based communication in the context of the Palladio Component Model (PCM), allowing to take into account middleware-specific influence factors. The latter are captured in a separate model automatically woven into the PCM instance by means of a model-to-model transformation. As a second contribution, we present a short case study of a real-life road traffic monitoring system showing how event-based communication can be modelled for performance prediction and capacity planning.
[9] Konstantin Bender. Automated Performance Model Extraction of Enterprise Data Fabrics. Master's thesis, Karlsruhe Institute of Technology, Karlsruhe, Germany, May 2010. [ bib ]
[10] Nikolaus Huber, Steffen Becker, Christoph Rathfelder, Jochen Schweflinghaus, and Ralf Reussner. Performance Modeling in Industry: A Case Study on Storage Virtualization. In ACM/IEEE 32nd International Conference on Software Engineering (ICSE 2010), Software Engineering in Practice Track, Cape Town, South Africa, May 2-8, 2010, pages 1-10. ACM, New York, NY, USA. May 2010, Acceptance Rate (Full Paper): 23% (16/71). [ bib | DOI | slides | .pdf | Abstract ]
In software engineering, performance and the integration of performance analysis methodologies are gaining increasing importance, especially for complex systems. Well-developed methods and tools can predict non-functional performance properties like response time or resource utilization in early design stages, thus promising time and cost savings. However, as performance modeling and performance prediction are still a young research area, the methods are not yet well-established and in widespread industrial use. This work is a case study of the applicability of the Palladio Component Model as a performance prediction method in an industrial environment. We model and analyze different design alternatives for storage virtualization on an IBM (Trademark of IBM in USA and/or other countries) system. The model calibration, validation and evaluation are based on data measured on a System z9 (Trademark of IBM in USA and/or other countries) as a proof of concept. The results show that performance predictions can identify performance bottlenecks and evaluate design alternatives in early stages of system development. The experience gained was that performance modeling helps to understand and analyze a system. Hence, this case study substantiates that performance modeling is applicable in industry and a valuable method for evaluating design decisions.
[11] Rouven Krebs and Christian Hochwarth. Method and system for managing learning materials presented offline. Patent US 94886, April 2010. [ bib ]
[12] Kai Sachs, Stefan Appel, Samuel Kounev, and Alejandro Buchmann. Benchmarking Publish/Subscribe-based Messaging Systems. In Proc. of the 2nd International Workshop on Benchmarking of Database Management Systems and Data-Oriented Web Technologies (BenchmarX'10), Martin Necasky and Eric Pardede, editors, April 2010, volume 6193 of Lecture Notes in Computer Science (LNCS). Springer. April 2010. [ bib | .pdf ]
[13] Rouven Krebs. Method and system for an adaptive learning strategy. Patent US 70443, March 2010. [ bib ]
[14] Thomas Schuster, Christoph Rathfelder, Nelly Schuster, and Jens Nimis. Comprehensive Tool Support for Iterative SOA Evolution. In Proceedings of the International Workshop on SOA Migration and Evolution 2010 (SOAME 2010) as part of the 14th European Conference on Software Maintenance and Reengineering (CSMR 2010), March 15, 2010, pages 1-10. [ bib | .pdf | Abstract ]
In recent years, continuously changing market situations have required IT systems that are flexible and highly responsive to changes of the underlying business processes. The transformation to service-oriented architecture (SOA) concepts, mainly services and loose coupling, promises to meet these demands. However, the migration of existing systems towards SOA entails elevated complexity in management and evolution processes. Studies in this area of research have revealed a gap between the continuous tool support that development teams need and the tool support actually available throughout the phases of evolution processes. Thus, in this article we introduce a method that fosters evolution by an iterative approach and illustrate how each phase of this method can be tool-supported.
[15] Jens Happe, Dennis Westermann, Kai Sachs, and Lucia Kapova. Statistical Inference of Software Performance Models for Parametric Performance Completions. In Research into Practice - Reality and Gaps (Proceedings of QoSA 2010), George Heineman, Jan Kofron, and Frantisek Plasil, editors, 2010, volume 6093 of Lecture Notes in Computer Science (LNCS), pages 20-35. Springer. 2010. [ bib | .pdf | Abstract ]
Software performance engineering (SPE) enables software architects to ensure high performance standards for their applications. However, applying SPE in practice is still challenging. Most enterprise applications include a large software basis, such as middleware and legacy systems. In many cases, the software basis is the determining factor of the system's overall timing behavior, throughput, and resource utilization. To capture these influences on the overall system's performance, established performance prediction methods (model-based and analytical) rely on models that describe the performance-relevant aspects of the system under study. Creating such models requires detailed knowledge of the system's structure and behavior that, in most cases, is not available. In this paper, we abstract from the internal structure of the system under study. We focus our efforts on message-oriented middleware (MOM) and analyze the dependency between the MOM's usage and its performance. We use statistical inference to infer these dependencies from observations. For ActiveMQ 5.3, the resulting functions predict the performance with a relative mean square error of 0.1.
[16] Michael Hauck, Matthias Huber, Markus Klems, Samuel Kounev, Jörn Müller-Quade, Alexander Pretschner, Ralf Reussner, and Stefan Tai. Challenges and Opportunities of Cloud Computing - Trade-off Decisions in Cloud Computing Architecture. Technical Report 2010-19, Karlsruhe Institute of Technology, Faculty of Informatics, 2010. [ bib | http ]
[17] Samuel Kounev, Simon Spinner, and Philipp Meier. QPME 2.0 - A Tool for Stochastic Modeling and Analysis Using Queueing Petri Nets. In From Active Data Management to Event-Based Systems and More, Kai Sachs, Ilia Petrov, and Pablo Guerrero, editors, volume 6462 of Lecture Notes in Computer Science, pages 293-311. Springer-Verlag, Berlin, Heidelberg, 2010. 10.1007/978-3-642-17226-7_18. [ bib | http | .pdf | Abstract ]
Queueing Petri nets are a powerful formalism that can be exploited for modeling distributed systems and analyzing their performance and scalability. By combining the modeling power and expressiveness of queueing networks and stochastic Petri nets, queueing Petri nets provide a number of advantages. In this paper, we present Version 2.0 of our tool QPME (Queueing Petri net Modeling Environment) for modeling and analysis of systems using queueing Petri nets. The development of the tool was initiated by Samuel Kounev in 2003 at the Technische Universität Darmstadt in the group of Prof. Alejandro Buchmann. Since then the tool has been distributed to more than 100 organizations worldwide. QPME provides an Eclipse-based editor for building queueing Petri net models and a powerful simulation engine for analyzing the models. After presenting the tool, we discuss ongoing work on the QPME project and the planned future enhancements of the tool.
[18] Philipp Meier. Automated Transformation of Palladio Component Models to Queueing Petri Nets. Master's thesis, Karlsruhe Institute of Technology (KIT), Karlsruhe, Germany, 2010. FZI Prize "Best Diploma Thesis". [ bib | .pdf ]
[19] Qais Noorshams. Focusing the optimization of software architecture models using non-functional requirements. Master's thesis, Karlsruhe Institute of Technology, Karlsruhe, Germany, 2010. [ bib | .pdf ]
[20] Qais Noorshams, Anne Martens, and Ralf Reussner. Using quality of service bounds for effective multi-objective software architecture optimization. In Proceedings of the 2nd International Workshop on the Quality of Service-Oriented Software Systems (QUASOSS '10), Oslo, Norway, October 4, 2010, 2010, pages 1:1-1:6. ACM, New York, NY, USA. 2010. [ bib | DOI | http | .pdf ]
[21] Marcel von Quast. Automatisierte Performance-Analyse von Virtualisierungsplattformen. Master's thesis, Karlsruhe Institute of Technology (KIT), Karlsruhe, Germany, 2010. [ bib ]
[22] Kai Sachs. Performance Modeling and Benchmarking of Event-Based Systems. PhD thesis, TU Darmstadt, Karolinenplatz 5, 64289 Darmstadt, Germany, 2010. SPEC Distinguished Dissertation Award. [ bib ]
[23] Arnd Schröter, Gero Mühl, Samuel Kounev, Helge Parzyjegla, and Jan Richling. Stochastic Performance Analysis and Capacity Planning of Publish/Subscribe Systems. In 4th ACM International Conference on Distributed Event-Based Systems (DEBS 2010), July 12-15, Cambridge, United Kingdom, 2010. ACM, New York, USA. 2010, Acceptance Rate: 25%. [ bib | .pdf ]
[24] Victor Pankratius and Samuel Kounev, editors. Emerging Research Directions in Computer Science. Contributions from the Young Informatics Faculty in Karlsruhe, Karlsruhe, Germany, 2010. KIT Scientific Publishing. ISBN: 978-3-86644-508-6. [ bib | http ]


Publications 2009

[1] Christoph Rathfelder and Henning Groenda. The Architecture Documentation Maturity Model ADM2. In Proceedings of the 3rd Workshop MDD, SOA und IT-Management (MSI 2009), Oldenburg, Germany, October 6-7, 2009, pages 65-80. GiTO-Verlag, Berlin, Germany. October 2009. [ bib | .pdf | Abstract ]
Today, the architectures of software systems are not stable for their whole lifetime but are often adapted, driven by business needs. Preserving their quality characteristics across each of these changes requires deep knowledge of the requirements and the systems themselves. Proper documentation reduces the risk that knowledge is lost and hence forms a basis for the system's maintenance in the long run. However, the influence of architectural documentation on the maintainability of software systems is neglected in current quality assessment methods. They are limited to documentation for anticipated change scenarios and do not provide a general assessment approach. In this paper, we propose a maturity model for architecture documentation. It is shaped relative to growing quality preservation maturity and independent of specific technologies or products. It supports the weighting of the necessary effort against the reduction of long-term risks in the maintenance phase. This makes it possible to take product maintainability requirements into account when selecting an appropriate documentation maturity level.
[2] Samuel Kounev and Kai Sachs. Benchmarking and Performance Modeling of Event-Based Systems. it - Information Technology, 51(5), September 2009, Oldenbourg Wissenschaftsverlag, Munich, Germany. [ bib | Abstract ]
Event-based systems are used increasingly often to build loosely-coupled distributed applications. With their growing popularity and gradual adoption in mission critical areas, the need for novel techniques for benchmarking and performance modeling of event-based systems is increasing. In this article, we provide an overview of the state-of-the-art in this area considering both centralized systems based on message-oriented middleware as well as large-scale distributed publish/subscribe systems. We consider a number of specific techniques for benchmarking and performance modeling, discuss their advantages and disadvantages, and provide references for further information. The techniques we review help to ensure that systems are designed and sized to meet their quality-of-service requirements.
[3] Christoph Rathfelder and Samuel Kounev. Modeling Event-Driven Service-Oriented Systems using the Palladio Component Model. In Proceedings of the 1st International Workshop on the Quality of Service-Oriented Software Systems (QUASOSS 2009), Amsterdam, The Netherlands, August 24-28, 2009, pages 33-38. ACM, New York, USA. August 2009. [ bib | DOI | .pdf | Abstract ]
The use of event-based communication within a Service-Oriented Architecture promises several benefits including more loosely-coupled services and better scalability. However, the loose coupling of services makes it difficult for system developers to estimate the behavior and performance of systems composed of multiple services. Most existing performance prediction techniques for systems using event-based communication require specialized knowledge to build the necessary prediction models. Furthermore, general purpose design-oriented performance models for component-based systems provide limited support for modeling event-based communication. In this paper, we propose an extension of the Palladio Component Model (PCM) that provides natural support for modeling event-based communication. We show how this extension can be exploited to model event-driven service-oriented systems with the aim of evaluating their performance and scalability.
[4] Christoph Rathfelder and Samuel Kounev. Model-based performance prediction for event-driven systems. In Proceedings of the Third ACM International Conference on Distributed Event-Based Systems (DEBS 2009), Nashville, Tennessee, July 6-9, 2009, pages 33:1-33:2. ACM, New York, NY, USA. July 2009. [ bib | DOI | http | .pdf | Abstract ]
The event-driven communication paradigm provides a number of advantages for building loosely coupled distributed systems. However, the loose coupling of components in such systems makes it hard for developers to estimate their behavior and performance under load. Most existing performance prediction techniques for systems using event-driven communication require specialized knowledge to build the necessary prediction models. In this paper, we propose an extension of the Palladio Component Model (PCM) that provides natural support for modeling event-based communication and supports different performance prediction techniques.
[5] Fabian Brosig. Automated Extraction of Palladio Component Models from Running Enterprise Java Applications. Master's thesis, Universität Karlsruhe (TH), Karlsruhe, Germany, June 2009. FZI Prize "Best Diploma Thesis". [ bib ]
[6] Kai Sachs, Samuel Kounev, Stefan Appel, and Alejandro Buchmann. A Performance Test Harness For Publish/Subscribe Middleware. In SIGMETRICS/Performance 2009 International Conference, Seattle, WA, USA, June 15-19, 2009, June 2009. (Demo Paper). [ bib | http | .pdf | Abstract ]
Publish/subscribe is becoming increasingly popular as a communication paradigm for loosely-coupled message exchange. It is used as a building block in major new software architectures and technology domains such as Enterprise Service Bus, Enterprise Application Integration, Service-Oriented Architecture and Event-Driven Architecture. The growing adoption of these technologies leads to a strong need for benchmarks and performance evaluation tools in this area. In this demonstration, we present jms2009-PS, a benchmark for publish/subscribe middleware based on the Java Message Service standard interface.
[7] Nikolaus Huber. Performance Modeling of Storage Virtualization. Master's thesis, Universität Karlsruhe (TH), Karlsruhe, Germany, April 2009. GFFT Prize. [ bib ]
[8] Henning Groenda, Christoph Rathfelder, and Ralph Mueller. Best of Eclipse DemoCamps - Ein Erfahrungsbericht vom dritten Karlsruher Eclipse DemoCamp. Eclipse Magazine, 3:8-10, March 2009. [ bib ]
[9] Ramon Nou, Samuel Kounev, Ferran Julia, and Jordi Torres. Autonomic QoS control in enterprise Grid environments using online simulation. Journal of Systems and Software, 82(3):486-502, March 2009, Elsevier Science Publishers B. V., Amsterdam, The Netherlands. [ bib | DOI | http | .pdf | Abstract ]
As Grid Computing increasingly enters the commercial domain, performance and Quality of Service (QoS) issues are becoming a major concern. The inherent complexity, heterogeneity and dynamics of Grid computing environments pose some challenges in managing their capacity to ensure that QoS requirements are continuously met. In this paper, a comprehensive framework for autonomic QoS control in enterprise Grid environments using online simulation is proposed. The paper presents a novel methodology for designing autonomic QoS-aware resource managers that have the capability to predict the performance of the Grid components they manage and allocate resources in such a way that service level agreements are honored. Support for advanced features such as autonomic workload characterization on-the-fly, dynamic deployment of Grid servers on demand, as well as dynamic system reconfiguration after a server failure is provided. The goal is to make the Grid middleware self-configurable and adaptable to changes in the system environment and workload. The approach is subjected to an extensive experimental evaluation in the context of a real-world Grid environment and its effectiveness, practicality and performance are demonstrated.
[10] Fabian Brosig, Samuel Kounev, and Klaus Krogmann. Automated Extraction of Palladio Component Models from Running Enterprise Java Applications. In Proceedings of the 1st International Workshop on Run-time mOdels for Self-managing Systems and Applications (ROSSA 2009). In conjunction with the Fourth International Conference on Performance Evaluation Methodologies and Tools (VALUETOOLS 2009), Pisa, Italy, 2009, pages 10:1-10:10. ACM, New York, NY, USA. 2009. [ bib | .pdf | Abstract ]
Nowadays, software systems have to fulfill increasingly stringent requirements for performance and scalability. To ensure that a system meets its performance requirements during operation, the ability to predict its performance under different configurations and workloads is essential. Most performance analysis tools currently used in industry focus on monitoring the current system state. They provide low-level monitoring data without any performance prediction capabilities. For performance prediction, performance models are normally required. However, building predictive performance models manually requires a lot of time and effort. In this paper, we present a method for automated extraction of performance models of Java EE applications, based on monitoring data collected during operation. We extract instances of the Palladio Component Model (PCM) - a performance meta-model targeted at component-based systems. We evaluate the model extraction method in the context of a case study with a real-world enterprise application. Even though the extraction requires some manual intervention, the case study demonstrates that the existing gap between low-level monitoring data and high-level performance models can be closed.
[11] Fabian Brosig, Samuel Kounev, and Charles Paclat. Using WebLogic Diagnostics Framework to Enable Performance Prediction for Java EE Applications. Oracle Technology Network (OTN) Article, 2009. [ bib | .html | Abstract ]
Throughout the system life cycle, the ability to predict a software system's performance under different configurations and workloads is highly valuable to ensure that the system meets its performance requirements. During the design phase, performance prediction helps to evaluate different design alternatives. At deployment time, it facilitates system sizing and capacity planning. During operation, predicting the effect of changes in the workload or in the system configuration is beneficial for run-time performance management. The alternative to performance prediction is to deploy the system in an environment reflecting the configuration of interest and conduct experiments measuring the system performance under the respective workloads. Such experiments, however, are normally very expensive and time-consuming and therefore often considered not to be economically viable. To enable performance prediction we need an abstraction of the real system that incorporates performance-relevant data, i.e., a performance model. Based on such a model, performance analysis can be carried out. Unfortunately, building predictive performance models manually requires a lot of time and effort. The model must be designed to reflect the abstract system structure and capture its performance-relevant aspects. In addition, model parameters like resource demands or system configuration parameters have to be determined. Given the costs of building performance models, techniques for automatic extraction of models based on observation of the system at run-time are highly desirable. During system development, such models can be exploited to evaluate the performance of system prototypes. During operation, an automatically extracted performance model can be applied for efficient and performance-aware resource management. 
For example, if one observes an increased user workload and assumes a steady workload growth rate, performance predictions help to determine when the system would reach its saturation point. This way, system operators can react to the changing workload before the system has failed to meet its performance objectives thus avoiding a violation of service level agreements (SLAs). Current performance analysis tools used in industry mostly focus on profiling and monitoring transaction response times and resource consumption. The tools often provide large amounts of low level data while important information needed for building performance models is missing, e.g., the resource demands of individual components. In this article, we present a method for automated extraction of performance models for Java EE applications during operation. We implemented the method in a tool prototype and evaluated its effectiveness in the context of a case study with an early prototype of the SPECjEnterprise2009 benchmark application which in the following we will refer to as SPECjEnterprise2009_pre. (SPECjEnterprise2009 is the successor benchmark of the SPECjAppServer2004 benchmark developed by the Standard Performance Evaluation Corp. [SPEC]; SPECjEnterprise is a trademark of SPEC. The SPECjEnterprise2009 results or findings in this publication have not been reviewed or accepted by SPEC, therefore no comparison nor performance inference can be made against any published SPEC result.) The target Java EE platform we consider is Oracle WebLogic Server (WLS). The extraction is based on monitoring data that is collected during operation using the WebLogic Diagnostics Framework (WLDF). As a performance model, we selected the Palladio Component Model (PCM). PCM is a sophisticated performance modeling framework with mature tool support. 
In contrast to low level mathematical models like, e.g., queueing networks, PCM is a high-level UML-like design-oriented model that captures the performance-relevant aspects of the system architecture. This makes PCM models easy to understand and use by software developers. We begin by providing some background on the technologies we use, focusing on the WLDF monitoring framework and the PCM models. We then describe the model extraction method in more detail. Finally, we present the case study we conducted and conclude with a summary.
[12] Samuel Kounev. Wiley Encyclopedia of Computer Science and Engineering, edited by Benjamin W. Wah, chapter Software Performance Evaluation. Wiley-Interscience, John Wiley & Sons Inc., 2009. [ bib | http | .pdf | Abstract ]
Modern software systems are expected to satisfy increasingly stringent requirements for performance and scalability. To avoid the pitfalls of inadequate quality of service, it is important to evaluate the expected performance and scalability characteristics of systems during all phases of their life cycle. At every stage, performance evaluation is carried out with a specific set of goals and constraints. In this article, we present an overview of the major methods and techniques for software performance evaluation. We start by considering the different types of workload models that are typically used in performance evaluation studies. We then discuss performance measurement techniques including platform benchmarking, application profiling and system load testing. Following this, we survey the most common methods and techniques for performance modeling of software systems. We consider the major types of performance models used in practice and discuss their advantages and disadvantages. Finally, we briefly discuss operational analysis as an alternative to queueing theoretic methods.
[13] Samuel Kounev and Christofer Dutz. QPME - A Performance Modeling Tool Based on Queueing Petri Nets. ACM SIGMETRICS Performance Evaluation Review (PER), Special Issue on Tools for Computer Performance Modeling and Reliability Analysis, 36(4):46-51, 2009, ACM, New York, NY, USA. [ bib | .pdf | Abstract ]
Queueing Petri nets are a powerful formalism that can be exploited for modeling distributed systems and analyzing their performance and scalability. By combining the modeling power and expressiveness of queueing networks and stochastic Petri nets, queueing Petri nets provide a number of advantages. In this paper, we present QPME (Queueing Petri net Modeling Environment) - a tool that supports the modeling and analysis of systems using queueing Petri nets. QPME provides an Eclipse-based editor for designing queueing Petri net models and a powerful simulation engine for analyzing the models. After presenting the tool, we discuss the ongoing work on the QPME project and the planned future enhancements of the tool.
[14] Gero Mühl, Arnd Schröter, Helge Parzyjegla, Samuel Kounev, and Jan Richling. Stochastic Analysis of Hierarchical Publish/Subscribe Systems. In Proceedings of the 15th International European Conference on Parallel and Distributed Computing (Euro-Par 2009), Delft, The Netherlands, August 25-28, 2009. Springer Verlag. 2009, Acceptance Rate (Full Paper): 33%. [ bib | http | .pdf | Abstract ]
With the gradual adoption of publish/subscribe systems in mission critical areas, it is essential that systems are subjected to rigorous performance analysis before they are put into production. However, existing approaches to performance modeling and analysis of publish/subscribe systems suffer from many limitations that seriously constrain their practical applicability. In this paper, we present a generalized method for stochastic analysis of publish/subscribe systems employing identity-based hierarchical routing. The method is based on an analytical model that addresses the major limitations underlying existing work in this area. In particular, it supports arbitrary broker overlay topologies and allows workload parameters, e.g., publication rates and subscription lifetimes, to be set individually for each broker. The analysis is illustrated by a running example that helps to gain a better understanding of the derived mathematical relationships.
[15] Kai Sachs, Samuel Kounev, Stefan Appel, and Alejandro Buchmann. Benchmarking of Message-Oriented Middleware (Poster Paper). In Proceedings of the 3rd ACM International Conference on Distributed Event-Based Systems (DEBS 2009), Nashville, TN, USA, July 6-9, 2009. ACM, New York, NY, USA. 2009. [ bib | http | .pdf | Abstract ]
In this poster, we provide an overview of our past and current research in the area of Message-Oriented Middleware (MOM) performance benchmarks. Our main research motivation is a) to gain a better understanding of the performance of MOM, b) to show how to use benchmarks for the evaluation of performance aspects, and c) to establish performance modeling techniques. For a better understanding, we first introduce the Java Message Service (JMS) standard. Afterwards, we provide an overview of our work on MOM benchmark development, i.e., we present the SPECjms2007 benchmark and the new jms2009-PS, a test harness designed specifically for JMS-based pub/sub. We outline a new case study with jms2009-PS and present first results of our work-in-progress.
[16] Kai Sachs, Samuel Kounev, Jean Bacon, and Alejandro Buchmann. Benchmarking message-oriented middleware using the SPECjms2007 benchmark. Performance Evaluation, 66(8):410-434, 2009, Elsevier Science Publishers B. V., Amsterdam, The Netherlands. [ bib | DOI | http | .pdf | Abstract ]
Message-oriented middleware (MOM) is at the core of a vast number of financial services and telco applications, and is gaining increasing traction in other industries, such as manufacturing, transportation, health-care and supply chain management. Novel messaging applications, however, pose some serious performance and scalability challenges. In this paper, we present a methodology for performance evaluation of MOM platforms using the SPECjms2007 benchmark which is the world's first industry-standard benchmark specialized for MOM. SPECjms2007 is based on a novel application in the supply chain management domain designed to stress MOM infrastructures in a manner representative of real-world applications. In addition to providing a standard workload and metrics for MOM performance, the benchmark provides a flexible performance analysis framework that allows users to tailor the workload to their requirements. The contributions of this paper are: i) we present a detailed workload characterization of SPECjms2007 with the goal to help users understand the internal components of the workload and the way they are scaled, ii) we show how the workload can be customized to exercise and evaluate selected aspects of MOM performance, iii) we present a case study of a leading JMS platform, the BEA WebLogic server, conducting an in-depth performance analysis of the platform under a number of different workload and configuration scenarios. The methodology we propose is the first one that uses an industry-standard benchmark providing both a representative workload as well as the ability to customize it to evaluate the features of MOM platforms selectively.


Publications 2008

[1] Christoph Rathfelder and Henning Groenda. Towards an Architecture Maintainability Maturity Model (AM3). Softwaretechnik-Trends, 28(4):3-7, November 2008, GI (Gesellschaft für Informatik), Bonn, Germany. [ bib | .pdf ]
[2] Christoph Rathfelder, Henning Groenda, and Ralf Reussner. Software Industrialization and Architecture Certification. In Industrialisierung des Software-Managements: Fachtagung des GI-Fachausschusses Management der Anwendungsentwicklung und -Wartung im Fachbereich Wirtschaftsinformatik (WI-MAW), Georg Herzwurm and Martin Mikusz, editors, volume 139 of Lecture Notes in Informatics (LNI), pages 169-180. November 2008. [ bib ]
[3] Christof Momm and Christoph Rathfelder. Model-based Management of Web Service Compositions in Service-Oriented Architectures. In MDD, SOA und IT-Management (MSI 2008), Ulrike Steffens, Jan Stefan Addicks, and Niels Streekmann, editors, Oldenburg, Germany, September 24, 2008, pages 25-40. GITO-Verlag, Berlin, Germany. September 2008. [ bib | .pdf | Abstract ]
Web service compositions (WSC), as part of a service-oriented architecture (SOA), have to be managed to ensure compliance with guaranteed service levels. In this context, a high degree of automation is desired, which can be achieved by applying autonomic computing concepts. This paper particularly focuses on the autonomic management of semi-dynamic compositions. Here, for each included service several variants are available that differ with regard to the service level they offer. Given this scenario, we first show how to instrument WSC in order to allow the service level to be controlled by switching the employed service variant. Second, we show how the desired self-manageability can be designed and implemented by means of a WSC manageability infrastructure. The presented approach is based on widely accepted methodologies and standards from the area of application and web service management, in particular the WBEM standards.
[4] Christoph Rathfelder and Henning Groenda. iSOAMM: An independent SOA Maturity Model. In Proceedings of the 8th IFIP International Conference on Distributed Applications and Interoperable Systems (DAIS 2008), Oslo, Norway, June 4-6, 2008, volume 5053/2008 of Lecture Notes in Computer Science (LNCS), pages 1-15. Springer-Verlag, Berlin, Heidelberg. June 2008. [ bib | http | .pdf | Abstract ]
The implementation of an enterprise-wide Service Oriented Architecture (SOA) is a complex task. In most cases, evolutionary approaches are used to handle this complexity. Maturity models are one possibility to plan and control such an evolution, as they allow evaluating the current maturity and identifying current shortcomings. In order to support an SOA implementation, maturity models should also support the selection of the most adequate maturity level and the deduction of a roadmap to this level. Existing SOA maturity models provide only weak assistance with the selection of an adequate maturity level. Most of them are developed by vendors of SOA products and often used to promote their products. In this paper, we introduce our independent SOA Maturity Model (iSOAMM), which is independent of the used technologies and products. In addition to the impacts on IT systems, it reflects the implications on organizational structures and governance. Furthermore, the iSOAMM lists the challenges, benefits and risks associated with each maturity level. This enables enterprises to select the most adequate maturity level for them, which is not necessarily the highest one.
[5] Samuel Kounev, Ian Gorton, and Kai Sachs, editors. Performance Evaluation: Metrics, Models and Benchmarks, Proceedings of the 2008 SPEC International Performance Evaluation Workshop (SIPEW 2008), Darmstadt, Germany, June 27-28, volume 5119 of Lecture Notes in Computer Science (LNCS), Heidelberg, Germany, June 2008. Springer. [ bib | http | Abstract ]
This book constitutes the refereed proceedings of the SPEC International Performance Evaluation Workshop, SIPEW 2008, held in Darmstadt, Germany, in June 2008. The 17 revised full papers presented together with 3 keynote talks were carefully reviewed and selected out of 39 submissions for inclusion in the book. The papers are organized in topical sections on models for software performance engineering; benchmarks and workload characterization; Web services and service-oriented architectures; power and performance; and profiling, monitoring and optimization.
[6] Christof Momm, Christoph Rathfelder, Ignacio Pérez Hallerbach, and Sebastian Abeck. Manageability Design for an Autonomic Management of Semi-Dynamic Web Service Compositions. In Proceedings of the Network Operations and Management Symposium (NOMS 2008), Salvador, Bahia, Brazil, April 7-11, 2008, pages 839-842. IEEE. April 2008. [ bib | DOI | .pdf | Abstract ]
Web service compositions (WSC), as part of a service-oriented architecture (SOA), have to be managed to ensure compliance with guaranteed service levels. In this context, a high degree of automation is desired, which can be achieved by applying autonomic computing concepts. This paper focuses in particular on the autonomic management of semi-dynamic compositions, where for each included service several variants are available that differ with regard to the service level they offer. Given this scenario, we first show how to instrument WSC so that the service level can be controlled by switching the employed service variant. Second, we show how the desired self-manageability can be designed and implemented by means of a WSC manageability infrastructure. The presented approach is based on widely accepted methodologies and standards from the area of application and web service management, in particular the WBEM standards.
[7] Samuel Kounev and Kai Sachs. SPECjms2007: A Novel Benchmark and Performance Analysis Framework for Message-Oriented Middleware. DEV2DEV Article, O'Reilly Publishing Group, March 2008. [ bib | .html ]
[8] Jakob Blomer, Fabian Brosig, Andreas Kreidler, Jens Küttel, Achim Kuwertz, Grischa Liebel, Daniel Popovic, Michael Stübs, Alexander M. Turek, Christian Vogel, Thomas Weinstein, and Thomas Wurth. Software Zertifizierung. Technical Report 4/2008, Universität Karlsruhe, Fakultät für Informatik, 2008. [ bib | Abstract ]
Systematic quality assurance is becoming increasingly important in the software development industry as a result of global competition. Especially on the way towards software industrialization and engineering-style software development, continuous quality assurance is indispensable. Certifications offer the possibility of having compliance with specific standards and criteria checked and attested by independent third parties in order to demonstrate the quality of a product or development process. Certifications can refer to products and processes as well as to the training and knowledge of individuals. Since certificates are issued by independent auditing bodies, certificates and their verifiable statements are generally trusted considerably more than quality promises made by software vendors themselves. Companies that have their processes certified, for example according to CMMI, can thereby demonstrate their ability to complete projects successfully and with predictable quality. Besides serving as a differentiator from competitors, certificates attesting compliance with standards can also be mandated by legislators; certificates from high-security domains such as nuclear power plants are one example. The seminar was organized like a scientific conference: submissions were reviewed in a two-stage peer-review process. In the first stage, the student papers were reviewed by fellow students; in the second stage, they were reviewed by the supervisors. The articles were presented in several sessions over two conference days. The best contributions were honored with best paper awards.
These went to Fabian Brosig for his paper Cost Benefit Analysis Method (CBAM), to Jakob Blomer for the paper Zertifizierung von Softwarebenchmarks, and to Grischa Liebel for the paper SWT - Das Standard Widget Toolkit; we once again warmly congratulate them on this outstanding achievement. In addition to the participants' presentations, an invited talk was given: Dr. Dirk Feuerhelm of 1&1 Internet AG kindly offered insights into his role as head of software development in his talk Softskills - Ist das objektorientiert oder modellgetrieben?
[9] Franz Brosch, Thomas Goldschmidt, Henning Groenda, Lucia Kapova, Klaus Krogmann, Michael Kuperberg, Anne Martens, Christoph Rathfelder, Ralf Reussner, and Johannes Stammel. Software-Industrialisierung. Interner Bericht, Universität Karlsruhe, Fakultät für Informatik, Institut für Programmstrukturen und Datenorganisation, Karlsruhe, 2008. [ bib | http | Abstract ]
The industrialization of software development is currently a much-discussed topic. It is primarily about increasing efficiency by raising the degree of standardization and automation and by intensifying the division of labor. This affects both the architectures underlying software systems and the development processes; service-oriented architectures, for instance, are an example of increased standardization within software systems. It has to be taken into account that the software industry differs from the classical manufacturing industries in that software is an immaterial product that can be reproduced arbitrarily often without high production costs. Nevertheless, many insights from the classical industries can be transferred to software engineering. The contents of this report stem mainly from the seminar "Software-Industrialisierung", which dealt with the professionalization of software development and software design. While classical software development is poorly structured and meets no elevated requirements regarding reproducibility or quality assurance, software development is undergoing a transformation in the course of industrialization. This includes division of labor, the introduction of development processes with predictable properties (cost, time required, ...) and, as a consequence, the creation of products with guaranteeable properties. The topics of the seminar included, among others: * software architectures * component-based software development * model-driven development * consideration of quality attributes in development processes. The seminar was organized like a scientific conference: submissions were reviewed in a two-stage peer-review process.
In the first stage, the student papers were reviewed by fellow students; in the second stage, they were reviewed by the supervisors. The articles were presented in several sessions over two conference days. The best contribution was honored with a Best Paper Award; it went to Benjamin Klatt for his paper Software Extension Mechanisms, whom we once again warmly congratulate on this outstanding achievement. In addition to the participants' presentations, an invited talk was given: Florian Kaltner and Tobias Pohl of the IBM development laboratory kindly offered insights into the development of plugins for Eclipse and into the build environment of the firmware for the zSeries mainframe servers.
[10] Samuel Kounev. QPME (Queueing Petri net Modeling Environment) Homepage. http://descartes.ipd.kit.edu/projects/QPME, 2008. [ bib | http ]
[11] Samuel Kounev, Kai Sachs, Jean Bacon, and Alejandro Buchmann. A Methodology for Performance Modeling of Distributed Event-Based Systems. In Proceedings of the 11th IEEE International Symposium on Object Oriented Real-Time Distributed Computing (ISORC 2008), Orlando, Florida, USA, May 5-7, 2008, pages 13-22. IEEE Computer Society, Washington, DC, USA, 2008. Acceptance Rate (Full Paper): 30%, Best Paper Award Nomination. [ bib | DOI | .pdf | Abstract ]
Distributed event-based systems (DEBS) are gaining increasing attention in new application areas such as transport information monitoring, event-driven supply-chain management and ubiquitous sensor-rich environments. However, as DEBS increasingly enter the enterprise and commercial domains, performance and quality of service issues are becoming a major concern. While numerous approaches to performance modeling and evaluation of conventional request/reply-based distributed systems are available in the literature, no general approach exists for DEBS. This paper is the first to provide a comprehensive methodology for workload characterization and performance modeling of DEBS. A workload model of a generic DEBS is developed and operational analysis techniques are used to characterize the system traffic and derive an approximation for the mean event delivery latency. Following this, a modeling technique is presented that can be used for accurate performance prediction. The paper is concluded with a case study of a real life system demonstrating the effectiveness and practicality of the proposed approach.
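The operational-analysis step summarized above rests on simple algebraic laws (utilization law, queueing approximations). A minimal sketch of the idea for a single event-broker resource; all numbers and the single-queue M/M/1 approximation are our own illustrative assumptions, not values from the paper:

```python
# Operational laws applied to one event-broker resource.
# Illustrative values only, not data from the paper.

def utilization(throughput, service_demand):
    """Utilization law: U = X * D."""
    return throughput * service_demand

def mm1_residence_time(service_demand, util):
    """M/M/1 approximation for mean residence time: R = D / (1 - U)."""
    assert 0 <= util < 1, "resource must not be saturated"
    return service_demand / (1.0 - util)

X = 400.0   # events/sec through the broker (assumed)
D = 0.002   # sec of broker CPU per event (assumed)

U = utilization(X, D)         # utilization of the broker CPU
R = mm1_residence_time(D, U)  # mean delivery latency contribution of this hop

print(f"utilization={U:.2f}, mean latency={R * 1000:.1f} ms")
```

With these numbers the broker is 80% utilized and contributes 10 ms of mean delivery latency; the paper derives such latency approximations per hop and composes them along the event delivery path.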
[12] Kai Sachs and Samuel Kounev. Kaffeekunde - SPECjms misst Message-oriented Middleware. iX Magazin, Heft 02/2008, Heise Zeitschriften Verlag, 2008. [ bib | http ]


Publications 2007

[1] Christoph Rathfelder and Henning Groenda. Geschäftsprozessorientierte Kategorisierung von SOA. In 2. Workshop Bewertungsaspekte serviceorientierter Architekturen, Karlsruhe, Germany, November 13, 2007, pages 11-22. SHAKER Verlag. November 2007. [ bib | .pdf | Abstract ]
Service-oriented architectures (SOAs) promise better support for business processes. However, there are different interpretations of what a service-oriented architecture (SOA) is. Since improved business-process support is one of the most frequent arguments for SOAs, it makes sense to categorize the various SOA variants according to the process support they enable. Previous categorization approaches are in many cases limited to particular technologies or standards and address the provided process support only in passing. This article presents such a business-process-oriented categorization of SOAs.
[2] Ramon Nou, Samuel Kounev, and Jordi Torres. Building Online Performance Models of Grid Middleware with Fine-Grained Load-Balancing: A Globus Toolkit Case Study. In Formal Methods and Stochastic Models for Performance Evaluation, Proceedings of the 4th European Performance Engineering Workshop (EPEW 2007), Berlin, Germany, September 27-28, 2007, Katinka Wolter, editor, volume 4748 of Lecture Notes in Computer Science (LNCS), pages 125-140. Springer Verlag, Heidelberg, Germany, September 2007. [ bib | DOI | http | .pdf | Abstract ]
As Grid computing increasingly enters the commercial domain, performance and Quality of Service (QoS) issues are becoming a major concern. To guarantee that QoS requirements are continuously satisfied, the Grid middleware must be capable of predicting the application performance on the fly when deciding how to distribute the workload among the available resources. One way to achieve this is by using online performance models that get generated and analyzed on the fly. In this paper, we present a novel case study with the Globus Toolkit in which we show how performance models can be generated dynamically and used to provide online performance prediction capabilities. We have augmented the Grid middleware with an online performance prediction component that can be called at any time during operation to predict the Grid performance for a given resource allocation and load-balancing strategy. We evaluate the quality of our performance prediction mechanism and present some experimental results that demonstrate its effectiveness and practicality. The framework we propose can be used to design intelligent QoS-aware resource allocation and admission control mechanisms.
[3] Christof Momm, Christian Mayerl, Christoph Rathfelder, and Sebastian Abeck. A Manageability Infrastructure for the Monitoring of Web Service. In Proceedings of the 14th Annual Workshop of HP Software University Association, H. G. Hegering, H. Reiser, M. Schiffers, and Th. Nebe, editors, Leibniz Computing Center and Munich Network Management Team, Germany, July 8-11, 2007, pages 103-114. Infonomies Consulting, Stuttgart, Germany. July 2007. [ bib | .pdf | Abstract ]
The management of web service compositions, where the employed atomic web services as well as the compositions themselves are offered on the basis of Service Level Agreements (SLA), implies new requirements for the management infrastructure. In this paper we introduce the conceptual design and implementation of a standards-based and flexible manageability infrastructure offering comprehensive management information for an SLA-driven management of web service compositions. Our solution is based on well-understood methodologies and standards from the area of application and web service management, in particular the WBEM standards.
[4] Christoph Rathfelder. Management in serviceorientierten Architekturen: Eine Managementinfrastruktur für die Überwachung komponierter Webservices. VDM Verlag Dr. Müller, Saarbrücken, Germany, April 2007. [ bib ]
[5] Samuel Kounev and Alejandro Buchmann. On the Use of Queueing Petri Nets for Modeling and Performance Analysis of Distributed Systems. In Petri Net, Theory and Application, Vedran Kordic, editor. Advanced Robotic Systems International, I-Tech Education and Publishing, Vienna, Austria, February 2007. [ bib | http | .pdf | Abstract ]
Predictive performance models are used increasingly throughout the phases of the software engineering lifecycle of distributed systems. However, as systems grow in size and complexity, building models that accurately capture the different aspects of their behavior becomes a more and more challenging task. The challenge stems from the limited model expressiveness on the one hand and the limited scalability of model analysis techniques on the other. This chapter presents a novel methodology for modeling and performance analysis of distributed systems. The methodology is based on queueing Petri nets (QPNs) which provide greater modeling power and expressiveness than conventional modeling paradigms such as queueing networks and generalized stochastic Petri nets. Using QPNs, one can integrate both hardware and software aspects of system behavior into the same model. In addition to hardware contention and scheduling strategies, QPNs make it easy to model software contention, simultaneous resource possession, synchronization, blocking and asynchronous processing. These aspects have significant impact on the performance of modern distributed systems. To avoid the problem of state space explosion, our methodology uses discrete event simulation for model analysis. We propose an efficient and reliable method for simulation of QPNs. As a validation of our approach, we present a case study of a real-world distributed system, showing how our methodology is applied in a step-by-step fashion to evaluate the system performance and scalability. The system studied is a deployment of the industry-standard SPECjAppServer2004 benchmark. A detailed model of the system and its workload is built and used to predict the system performance for several deployment configurations and workload scenarios of interest. Taking advantage of the expressive power of QPNs, our approach makes it possible to model systems at a higher degree of accuracy providing a number of important benefits.
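The simulation-based analysis described in the abstract can be illustrated in miniature: the sketch below simulates a single M/M/1 queueing station by discrete events and compares the observed mean response time with the closed-form result. A real QPN simulator such as SimQPN additionally handles places, token colors, synchronization and scheduling strategies; this toy station and its rates are our own illustration:

```python
import random

def simulate_mm1(lam, mu, n_jobs=200_000, seed=42):
    """Minimal discrete-event simulation of one M/M/1 queueing station.

    Tracks each job's arrival and departure (Lindley recursion) and
    returns the observed mean response time.
    """
    rng = random.Random(seed)
    arrival = 0.0   # time of the current job's arrival
    free_at = 0.0   # time at which the server becomes free
    total_resp = 0.0
    for _ in range(n_jobs):
        arrival += rng.expovariate(lam)          # exponential interarrivals
        start = max(arrival, free_at)            # wait if the server is busy
        free_at = start + rng.expovariate(mu)    # exponential service time
        total_resp += free_at - arrival          # response = departure - arrival
    return total_resp / n_jobs

lam, mu = 5.0, 10.0          # assumed arrival and service rates
sim = simulate_mm1(lam, mu)
analytic = 1.0 / (mu - lam)  # closed-form M/M/1 mean response time
print(f"simulated={sim:.3f}, analytic={analytic:.3f}")
```

The simulated mean converges to the analytic 0.2 s, which is the kind of cross-validation against known results that gives confidence in simulation engines before they are applied to models with no closed-form solution.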
[6] Samuel Kounev and Christofer Dutz. QPME 1.0 User's Guide. Technische Universität Darmstadt, Darmstadt, Germany, January 2007. [ bib | .pdf | Abstract ]
This document describes the software package QPME (Queueing Petri net Modeling Environment), a performance modeling and analysis tool based on the Queueing Petri Net (QPN) modeling formalism. QPN models are more sophisticated than conventional queueing networks and stochastic Petri nets and have greater expressive power. This provides a number of important benefits since it makes it possible to model systems at a higher degree of accuracy. QPME is made of two components: QPE (QPN Editor) and SimQPN (Simulator for QPNs). QPE provides a user-friendly graphical tool for modeling using QPNs based on the Eclipse/GEF framework. SimQPN provides an efficient discrete-event simulation engine for QPNs that makes it possible to analyze models of realistically-sized systems. QPME runs on a wide range of platforms including Windows, Linux and Solaris. QPME is developed and maintained by Samuel Kounev and Christofer Dutz.
[7] Samuel Kounev, Ramon Nou, and Jordi Torres. Using QPN models for QoS Control in Grid Middleware. Technical Report UPC-DAC-RR-CAP-2007-4, Computer Architecture Department, Technical University of Catalonia (UPC), Spain, 2007. [ bib ]
[8] Samuel Kounev, Ramon Nou, and Jordi Torres. Autonomic QoS-Aware Resource Management in Grid Computing using Online Performance Models. In Proceedings of the Second International Conference on Performance Evaluation Methodologies and Tools (VALUETOOLS 2007), Nantes, France, October 23-25, 2007, pages 1-10. ICST (Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering), Brussels, Belgium, 2007. [ bib | DOI | http | .pdf | Abstract ]
As Grid Computing increasingly enters the commercial domain, performance and Quality of Service (QoS) issues are becoming a major concern. The inherent complexity, heterogeneity and dynamics of Grid computing environments pose some challenges in managing their capacity to ensure that QoS requirements are continuously met. In this paper, an approach to autonomic QoS-aware resource management in Grid computing based on online performance models is proposed. The paper presents a novel methodology for designing autonomic QoS-aware resource managers that have the capability to predict the performance of the Grid components they manage and allocate resources in such a way that service level agreements are honored. The goal is to make the Grid middleware self-configurable and adaptable to changes in the system environment and workload. The approach is subjected to an extensive experimental evaluation in the context of a real-world Grid environment and its effectiveness, practicality and performance are demonstrated.
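The core loop of such a model-driven resource manager (allocate resources so that predicted performance honors the SLA) can be caricatured with a much simpler model than the online QPN models the paper uses: split the arrival stream over c servers, treat each as an M/M/1 queue, and grow c until the predicted mean response time meets the SLA. All names and numbers below are our own illustrative assumptions:

```python
def servers_needed(lam, mu, sla_resp, max_servers=64):
    """Grow the allocation until the model predicts the SLA is met.

    Crude sizing sketch: arrivals split evenly over c servers, each
    modeled as an independent M/M/1 queue (assumption, not the paper's
    model). Returns (servers, predicted mean response time).
    """
    for c in range(1, max_servers + 1):
        per_server = lam / c
        if per_server >= mu:
            continue                      # saturated: try more servers
        resp = 1.0 / (mu - per_server)    # M/M/1 mean response time
        if resp <= sla_resp:
            return c, resp
    raise RuntimeError("SLA not attainable within max_servers")

# Illustrative numbers: 90 req/s total load, 10 req/s per-server
# service rate, SLA of 0.5 s mean response time.
c, resp = servers_needed(lam=90.0, mu=10.0, sla_resp=0.5)
print(f"allocate {c} servers, predicted mean response time {resp:.3f}s")
```

An autonomic manager would rerun this prediction whenever load or available resources change, which is what makes the middleware self-configurable.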
[9] Ramon Nou and Samuel Kounev. Preliminary Analysis of Globus Toolkit 4 to Create Prediction Models. Technical Report UPC-DAC-RR-2007-37, Computer Architecture Department, Technical University of Catalonia (UPC), Spain, 2007. [ bib | Abstract ]
As Data Grids become more commonplace, large data sets are being replicated and distributed to multiple sites, leading to the problem of determining which replica can be accessed most efficiently. The answer to this question can depend on many factors, including physical characteristics of the resources and the load behavior on the CPUs, networks, and storage devices that are part of the end-to-end path linking possible sources and sinks. We develop a predictive framework that combines (1) integrated instrumentation that collects information about the end-to-end performance of past transfers, (2) predictors to estimate future transfer times, and (3) a data delivery infrastructure that provides users with access to both the raw data and our predictions. We evaluate the performance of our predictors by applying them to log data collected from a wide area testbed. These preliminary results provide insights into the effectiveness of using predictors in this situation.
[10] Peter Pietzuch, David Eyers, Samuel Kounev, and Brian Shand. Towards a Common API for Publish/Subscribe. In Proceedings of the 2007 Inaugural International Conference on Distributed Event-Based Systems (DEBS 2007), Toronto, Canada, June 20-22, 2007, Hans-Arno Jacobsen, Gero Mühl, and Michael A. Jaeger, editors, volume 233 of ACM International Conference Proceeding Series, pages 152-157. ACM, New York, NY, USA, 2007. [ bib | DOI | http | .pdf | Abstract ]
Over the last decade a wide range of publish/subscribe (pub/sub) systems have come out of the research community. However, there is little consensus on a common pub/sub API, which would facilitate innovation, encourage application building, and simplify the evaluation of existing prototypes. Industry pub/sub standards tend to be overly complex, technology-centric, and hard to extend, thus limiting their applicability in research systems. In this paper we propose a common API for pub/sub that is tailored towards research requirements. The API supports three levels of compliance (with optional extensions): the lowest level specifies abstract operations without prescribing an implementation or data model; medium compliance describes interactions using a light-weight XML-RPC mechanism; finally, the highest level of compliance enforces an XML-RPC data model, enabling systems to understand each other's event and subscription semantics. We show that, by following this flexible approach with emphasis on extensibility, our API can be supported by many prototype systems with little effort.
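The lowest compliance level sketched in the abstract (abstract operations without a prescribed implementation or data model) might look roughly like the following; the interface, names and in-memory broker are our own illustration, not the API proposed in the paper:

```python
from abc import ABC, abstractmethod
from typing import Any, Callable

class PubSubAPI(ABC):
    """Abstract pub/sub operations: no wire format or data model imposed."""

    @abstractmethod
    def publish(self, event: Any) -> None: ...

    @abstractmethod
    def subscribe(self, predicate: Callable[[Any], bool],
                  callback: Callable[[Any], None]) -> int:
        """Register interest in matching events; returns a subscription id."""

    @abstractmethod
    def unsubscribe(self, sub_id: int) -> None: ...

class InMemoryBroker(PubSubAPI):
    """Trivial single-process implementation, useful only for testing."""

    def __init__(self):
        self._subs = {}
        self._next_id = 0

    def publish(self, event):
        for predicate, callback in list(self._subs.values()):
            if predicate(event):
                callback(event)

    def subscribe(self, predicate, callback):
        self._next_id += 1
        self._subs[self._next_id] = (predicate, callback)
        return self._next_id

    def unsubscribe(self, sub_id):
        self._subs.pop(sub_id, None)

broker = InMemoryBroker()
seen = []
broker.subscribe(lambda e: e.get("type") == "trade", seen.append)
broker.publish({"type": "trade", "qty": 100})  # matches the subscription
broker.publish({"type": "quote"})              # filtered out
```

Higher compliance levels in the paper then pin down the interaction mechanism (XML-RPC) and finally a shared data model, so that independent prototypes can interpret each other's events and subscriptions.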
[11] Christoph Rathfelder. Eine Managementinfrastruktur für die Überwachung komponierter Webservices. Master's thesis, Universität Karlsruhe (TH), Karlsruhe, 2007. [ bib | .pdf ]
[12] Kai Sachs, Samuel Kounev, Jean Bacon, and Alejandro Buchmann. Workload Characterization of the SPECjms2007 Benchmark. In Formal Methods and Stochastic Models for Performance Evaluation, Proceedings of the 4th European Performance Engineering Workshop (EPEW 2007), Berlin, Germany, September 27-28, 2007, Katinka Wolter, editor, volume 4748 of Lecture Notes in Computer Science (LNCS), pages 228-244. Springer Verlag, Heidelberg, Germany, 2007. [ bib | DOI | http | .pdf | Abstract ]
Message-oriented middleware (MOM) is at the core of a vast number of financial services and telco applications, and is gaining increasing traction in other industries, such as manufacturing, transportation, health-care and supply chain management. There is a strong interest in the end user and analyst communities for a standardized benchmark suite for evaluating the performance and scalability of MOM. In this paper, we present a workload characterization of the SPECjms2007 benchmark, which is the world's first industry-standard benchmark specialized for MOM. In addition to providing a standard workload and metrics for MOM performance, the benchmark provides a flexible performance analysis framework that allows users to customize the workload according to their requirements. The workload characterization presented in this paper serves two purposes: i) to help users understand the internal components of the SPECjms2007 workload and the way they are scaled, and ii) to show how the workload can be customized to exercise and evaluate selected aspects of MOM performance. We discuss how the various features supported by the benchmark can be exploited for in-depth performance analysis of MOM infrastructures.
[13] Kai Sachs, Samuel Kounev, Marc Carter, and Alejandro Buchmann. Designing a Workload Scenario for Benchmarking Message-Oriented Middleware. In Proceedings of the 2007 SPEC Benchmark Workshop, Austin, Texas, January 21, 2007. SPEC, 2007. [ bib | http | .pdf | Abstract ]
Message-oriented middleware (MOM) is increasingly adopted as an enabling technology for modern information-driven applications like event-driven supply chain management, transport information monitoring, stock trading and online auctions to name just a few. There is a strong interest in the commercial and research domains for a standardized benchmark suite for evaluating the performance and scalability of MOM. With all major vendors adopting JMS (Java Message Service) as a standard interface to MOM servers, there is at last a means for creating a standardized workload for evaluating products in this space. This paper describes a novel application in the supply chain management domain that has been specifically designed as a representative workload scenario for evaluating the performance and scalability of MOM products. This scenario is used as a basis in SPEC's new SPECjms benchmark which will be the world's first industry-standard benchmark for MOM.
[14] SPEC. SPECjms2007 - First industry-standard benchmark for enterprise messaging servers (JMS 1.1). Standard Performance Evaluation Corporation, 2007. SPECtacular Performance Award. [ bib | http ]


Publications 2006

[1] Samuel Kounev. Performance Modeling and Evaluation of Distributed Component-Based Systems using Queueing Petri Nets. IEEE Transactions on Software Engineering, 32(7):486-502, July 2006, IEEE Computer Society. [ bib | DOI | http | .pdf | Abstract ]
Performance models are used increasingly throughout the phases of the software engineering lifecycle of distributed component-based systems. However, as systems grow in size and complexity, building models that accurately capture the different aspects of their behavior becomes a more and more challenging task. In this paper, we present a novel case study of a realistic distributed component-based system, showing how Queueing Petri Net models can be exploited as a powerful performance prediction tool in the software engineering process. A detailed system model is built in a step-by-step fashion, validated, and then used to evaluate the system performance and scalability. Along with the case study, a practical performance modeling methodology is presented which helps to construct models that accurately reflect the system performance and scalability characteristics. Taking advantage of the modeling power and expressiveness of Queueing Petri Nets, our approach makes it possible to model the system at a higher degree of accuracy, providing a number of important benefits.
[2] Samuel Kounev. J2EE Performance and Scalability - From Measuring to Predicting. In Proceedings of the 2006 SPEC Benchmark Workshop, Austin, Texas, USA, January 23, 2006. SPEC, January 2006. [ bib | http | .pdf | Abstract ]
J2EE applications are becoming increasingly ubiquitous and with their increasing adoption, performance and scalability issues are gaining in importance. For a J2EE application to perform well and be scalable, both the platform on which it is built and the application design must be efficient and scalable. Industry-standard benchmarks such as the SPECjAppServer set of benchmarks help to evaluate the performance and scalability of alternative platforms for J2EE applications, however, they cannot be used to evaluate the performance and scalability of concrete applications built on the selected platforms. In this paper, we present a systematic approach for evaluating and predicting the performance and scalability of J2EE applications based on modeling and simulation. The approach helps to identify and eliminate bottlenecks in the application design and ensure that systems are designed and sized to meet their quality of service requirements. We introduce our approach by showing how it can be applied to the SPECjAppServer2004 benchmark which is used as a representative J2EE application. A detailed model of a SPECjAppServer2004 deployment is built in a step-by-step fashion and then used to predict the behavior of the system under load. The approach is validated by comparing model predictions against measurements on the real system.
[3] Samuel Kounev. Queueing Networks and Markov Chains, edited by Gunter Bolch, Stefan Greiner, Hermann de Meer and Kishor Shridharbhai Trivedi, chapter "Case Studies of Queueing Networks - J2EE Applications", pages 733-745. Wiley-Interscience, John Wiley & Sons Inc., 2nd edition, 2006. [ bib | http ]
[4] Samuel Kounev and Alejandro Buchmann. SimQPN - a tool and methodology for analyzing queueing Petri net models by means of simulation. Performance Evaluation, 63(4-5):364-394, 2006, Elsevier Science Publishers B. V., Amsterdam, The Netherlands. [ bib | DOI | http | .pdf ]
[5] Samuel Kounev, Christofer Dutz, and Alejandro Buchmann. QPME - Queueing Petri Net Modeling Environment. In Proceedings of the 3rd International Conference on Quantitative Evaluation of SysTems (QEST 2006), Riverside, California, USA, September 11-14, 2006, pages 115-116. IEEE Computer Society, Washington, DC, USA, 2006. [ bib | DOI | http | .pdf | Abstract ]
Queueing Petri nets are a powerful formalism that can be exploited for modeling distributed systems and analyzing their performance and scalability. However, currently available tools for modeling and analysis using queueing Petri nets are very limited in terms of the scalability of the analysis algorithms they provide. Moreover, such tools are available only on highly specialized platforms inaccessible to most potential users. In this paper, we present QPME - a Queueing Petri Net Modeling Environment that supports the modeling and analysis of systems using queueing Petri nets. QPME runs on a wide range of platforms and provides a powerful simulation engine that can be used to analyze models of realistically-sized systems.
[6] Christof Momm, Christoph Rathfelder, and Sebastian Abeck. Towards a Manageability Infrastructure for a Management of Process-Based Service Compositions. C&M Research Report, Cooperation & Management, 2006. [ bib | .pdf | Abstract ]
The management of process-oriented service compositions within a dynamic environment, where the employed core services are offered on service marketplaces and dynamically included into the composition on the basis of Service Level Agreements (SLA), demands a service management application that takes into account the specifics of process-oriented compositions and supports their automated provisioning. As a first step towards such an application, in this paper we introduce the conceptual design for an architecture and implementation of an interoperable and flexible manageability infrastructure offering comprehensive monitoring and control functionality for the management of service compositions. To achieve this, our approach is based on well-understood methodologies and standards from the area of application and web service management.
[7] Kai Sachs and Samuel Kounev. Message Types and Interfaces Between Components in SPECjms. Technical Report DVS06-3, SPEC OSG Java Subcommittee, 2006. [ bib ]
[8] Kai Sachs and Samuel Kounev. Workload Scenario for SPECjms - Supermarket Supply Chain. Technical Report DVS06-2, SPEC OSG Java Subcommittee, 2006. [ bib | Abstract ]
Message-oriented middleware (MOM) is increasingly adopted as an enabling technology for modern information-driven applications like event-driven supply chain management, transport information monitoring, stock trading and online auctions to name just a few. There is a strong interest in the commercial and research domains for a standardized benchmark suite for evaluating the performance and scalability of MOM. With all major vendors adopting JMS (Java Message Service) as a standard interface to MOM servers, there is at last a means for creating a standardized workload for evaluating products in this space. This paper describes a novel application in the supply chain management domain that has been specifically designed as a representative workload scenario for evaluating the performance and scalability of MOM products. This scenario is used as a basis in SPEC's new SPECjms benchmark which will be the world's first industry-standard benchmark for MOM.


Publications 2005

[1] Samuel Kounev. Performance Engineering of Distributed Component-Based Systems - Benchmarking, Modeling and Performance Prediction. Shaker Verlag, Ph.D. Thesis, Technische Universität Darmstadt, Germany, December 2005. Best Dissertation Award from the "Vereinigung von Freunden der Technischen Universität zu Darmstadt e.V.". [ bib | http | .pdf ]
[2] Samuel Kounev. SPECjAppServer2004 - The New Way to Evaluate J2EE Performance. DEV2DEV Article, O'Reilly Publishing Group, 2005. [ bib | .html | Abstract ]
This article presents SPECjAppServer2004, the new industry-standard benchmark for measuring the performance and scalability of J2EE hardware and software platforms. SPECjAppServer2004 is a completely new benchmark and not comparable to the SPEC J2EE benchmarks released in late 2002. This article discusses the business domains and workload modeled by the benchmark, as well as the benchmark design and architecture. The author also explains the meaning of the benchmark metrics, discusses the different purposes for which the benchmark can be used, and provides some links to additional information.
[3] SPEC. SPECjbb2005 - Industry-standard server-side Java benchmark (J2SE 5.0). Standard Performance Evaluation Corporation, 2005. SPECtacular Award. [ bib | http | Abstract ]
SPECjbb2005 (Java Server Benchmark) is SPEC's benchmark for evaluating the performance of server side Java. Like its predecessor, SPECjbb2000, SPECjbb2005 evaluates the performance of server side Java by emulating a three-tier client/server system (with emphasis on the middle tier). The benchmark exercises the implementations of the JVM (Java Virtual Machine), JIT (Just-In-Time) compiler, garbage collection, threads and some aspects of the operating system. It also measures the performance of CPUs, caches, memory hierarchy and the scalability of shared memory processors (SMPs). SPECjbb2005 provides a new enhanced workload, implemented in a more object-oriented manner to reflect how real-world applications are designed and introduces new features such as XML processing and BigDecimal computations to make the benchmark a more realistic reflection of today's applications.


Publications 2004

[1] Samuel Kounev, Björn Weis, and Alejandro Buchmann. Performance Tuning and Optimization of J2EE Applications on the JBoss Platform. Journal of Computer Resource Management, 113, 2004, Computer Measurement Group (CMG). [ bib | .pdf ]
[2] SPEC. SPECjAppServer2004 - Industry-standard enterprise Java application server benchmark (J2EE 1.4). Standard Performance Evaluation Corporation, 2004. SPECtacular Performance Award. [ bib | http ]


Publications 2003

[1] Kai S. Juse, Samuel Kounev, and Alejandro Buchmann. PetStore-WS: Measuring the Performance Implications of Web Services. In Proceedings of the 29th International Conference of the Computer Measurement Group on Resource Management and Performance Evaluation of Enterprise Computing Systems (CMG 2003), Dallas, Texas, USA, December 7-12, 2003, pages 113-123. Computer Measurement Group (CMG). [ bib | .pdf | .pdf | Abstract ]
Web Services are increasingly used to enable loosely coupled integration among heterogeneous systems but are perceived as a source of severe performance degradation. This paper looks at the impact on system performance when introducing Web Service interfaces to an originally tightly coupled application. Using two implementation variants of Sun's Java Pet Store application, one based strictly on the J2EE platform and the other implementing some interfaces as Web Services, performance is compared in terms of the achieved overall throughput, response times and latency.
[2] Samuel Kounev. Messaging Architecture and Asynchronous Interactions in SPECjAppServer. Technical Report TUD03-1, SPEC OSG Java Subcommittee, 2003. [ bib ]
[3] Samuel Kounev and Alejandro Buchmann. Performance Modeling and Evaluation of Large-Scale J2EE Applications. In Proceedings of the 29th International Conference of the Computer Measurement Group on Resource Management and Performance Evaluation of Enterprise Computing Systems (CMG 2003), Dallas, Texas, USA, December 7-12, 2003, pages 273-283. Computer Measurement Group (CMG). Best-Paper-Award. [ bib | .pdf | .pdf | Abstract ]
Modern J2EE applications are typically based on highly distributed architectures comprising multiple components deployed in a clustered environment. This makes it difficult for deployers to estimate the capacity of the deployment environment needed to guarantee that Service Level Agreements are met. This paper looks at the different approaches to this problem and discusses the difficulties that arise when one tries to apply them to large, real-world systems. The authors study a realistic J2EE application (the SPECjAppServer2002 benchmark) and show how analytical models can be exploited for capacity planning.
[4] Samuel Kounev and Alejandro Buchmann. Performance Modeling of Distributed E-Business Applications using Queueing Petri Nets. In Proceedings of the 2003 IEEE International Symposium on Performance Analysis of Systems and Software (ISPASS 2003), Austin, Texas, USA, March 6-8, 2003, pages 143-155. IEEE Computer Society, Washington, DC, USA. Best-Paper-Award. [ bib | DOI | http | .pdf | Abstract ]
In this paper we show how Queuing Petri Net (QPN) models can be exploited for performance analysis of distributed e-business systems. We study a real-world application, and demonstrate the benefits, in terms of modelling power and expressiveness, that QPN models provide over conventional modelling paradigms such as Queuing Networks and Petri Nets. As shown, QPNs facilitate the integration of both hardware and software aspects of system behavior in the same model. In addition to hardware contention and scheduling strategies, using QPNs one can easily model simultaneous resource possession, synchronization, blocking and contention for software resources. By validating the models presented through measurements, we show that they are not just powerful as a specification mechanism, but are also very powerful as a performance analysis and prediction tool. However, currently available tools and techniques for QPN analysis are limited. Improved solution methods, which enable larger models to be analyzed, need to be developed. By demonstrating the power of QPNs as a modelling paradigm in realistic scenarios, we hope to motivate further research in this area.


Publications 2002

[1] Samuel Kounev and Alejandro Buchmann. Performance Issues in E-Business Systems. In Proceedings of the International Conference on Advances in Infrastructure for e-Business, e-Education, e-Science, and e-Medicine on the Internet (SSGRR 2002w), L'Aquila, Italy, January 21-27, 2002. [ bib | .pdf | Abstract ]
Performance and scalability issues in e-business systems are gaining in importance as we move from hype and prototypes to real operational systems. Typical for this development is also the emergence of standard benchmarks, of which TPC-W for transactional B2C systems and ECperf for performance and scalability measurement of application servers are two of the better known examples. In this paper we present an experience report with the ECperf benchmark defined by Sun and discuss performance issues that we observed in our implementation of the benchmark. Some of these issues are related to the specification of the benchmark, for which we made suggestions on how to correct them, and others are related to database connectivity, locking patterns, and the need for asynchronous processing.
[2] Samuel Kounev and Alejandro Buchmann. Improving Data Access of J2EE Applications by Exploiting Asynchronous Messaging and Caching Services. In Proceedings of the 28th International Conference on Very Large Data Bases (VLDB 2002), Hong Kong, China, August 20-23, 2002, pages 574-585. VLDB Endowment, Morgan Kaufmann. Acceptance Rate (Full Paper): 14%. Best-Paper-Award Nomination. [ bib | .pdf | .pdf | Abstract ]
The J2EE platform provides a variety of options for making business data persistent using DBMS technology. However, the integration with existing backend database systems has proven to be of crucial importance for the scalability and performance of J2EE applications, because modern e-business systems are extremely data-intensive. As a result, the data access layer, and the link between the application server and the database server in particular, are very susceptible to turning into a system bottleneck. In this paper we use the ECperf benchmark as an example of a realistic application in order to illustrate the problems mentioned above and discuss how they could be approached and eliminated. In particular, we show how asynchronous, message-based processing could be exploited to reduce the load on the DBMS and improve system performance, scalability and reliability. Furthermore, we discuss the major issues related to the correct use of entity beans (the components provided by J2EE for modelling persistent data) and present a number of methods to optimize their performance utilizing caching mechanisms. We have evaluated the proposed techniques through measurements and have documented the performance gains that they provide.
[3] SPEC. SPECjAppServer2001 - Industry-standard enterprise Java application server benchmark (J2EE 1.2). Standard Performance Evaluation Corporation, 2002. [ bib | http | Abstract ]
SPECjAppServer2001 (Java Application Server) is a client/server benchmark for measuring the performance of Java Enterprise Application Servers using a subset of J2EE APIs in a complete end-to-end web application. Joining the client-side SPECjvm98 and the server-side SPECjbb2000, SPECjAppServer2001 continues the SPEC tradition of giving Java users the most objective and representative benchmark for measuring a system's ability to run Java applications. SPEC has designed the SPECjAppServer2001 benchmark to exercise the Java Enterprise Application Server, the Java Virtual Machine (JVM), as well as the server Systems Under Test (SUT). As a true J2EE application, the benchmark requires a functional RDBMS (for JDBC) and a Web Server, but it has been designed so that the SUT can be a single system. Please note that while the SPECjAppServer2001 suite is still available for purchase, the suite has been retired, no further results are being accepted for publication, and support is no longer provided.
[4] SPEC. SPECjAppServer2002 - Industry-standard enterprise Java application server benchmark (J2EE 1.3). Standard Performance Evaluation Corporation, 2002. [ bib | http | Abstract ]
SPECjAppServer2002 (Java Application Server) is a client/server benchmark for measuring the performance of Java Enterprise Application Servers using a subset of J2EE APIs in a complete end-to-end web application. It is the same as SPECjAppServer2001 (released in September 2002) except that the Enterprise Java Beans (EJBs) are defined using the EJB 2.0 specification instead of the EJB 1.1 specification. SPECjAppServer2002 can therefore take advantage of several EJB 2.0 features such as local interfaces, the EJB-QL query language, and Container Managed Relationships (CMR) between entity beans. Joining the client-side SPECjvm98 and the server-side SPECjbb2000, SPECjAppServer2002 and SPECjAppServer2001 continue the SPEC tradition of giving Java users the most objective and representative benchmarks for measuring a system's ability to run Java applications. SPEC has designed the SPECjAppServer2002 benchmark to exercise the Java Enterprise Application Server, the Java Virtual Machine (JVM), as well as the server Systems Under Test (SUT). As a true J2EE application, the benchmark requires a functional RDBMS (for JDBC) and a Web Server, but it has been designed so that the SUT can be a single system. Please note that while the SPECjAppServer2002 suite is still available for purchase, the suite has been retired, no further results are being accepted for publication, and support is no longer provided.


Publications 2001

[1] Samuel Kounev. Eliminating ECperf Persistence Bottlenecks when using RDBMS with Pessimistic Concurrency Control. Technical report, ECperf Expert Group at Sun Microsystems Inc., 2001. [ bib | .pdf ]
[2] Samuel Kounev. A Capacity Planning Methodology for Distributed E-Commerce Applications. Technical report, Technische Universität Darmstadt, Germany, 2001. [ bib | .pdf ]
[3] Samuel Kounev. Performance Prediction, Sizing and Capacity Planning for Distributed E-Commerce Applications. Technical report, Technische Universität Darmstadt, Germany, 2001. [ bib | .pdf ]


Publications 1999

[1] Samuel Kounev. Design and Development of an Electronic Commerce Environment. Master's thesis, University of Sofia, Sofia, Bulgaria, 1999. [ bib ]
[2] Samuel Kounev and Kiril Nikolov. The Analysis Phase in the Development of E-Commerce Software Systems. In Proceedings of the Tools Eastern Europe '99 Conference on Technology of Object Oriented Languages and Systems, Sofia-Blagoevgrad, Bulgaria, June 1-4, 1999, 1999. [ bib ]
[3] Plamen Nenov, Samuel Kounev, and Dimiter Mihailov. Distributed Video-Conferencing System Organized for Work on the Internet with the use of Multimedia Server. Journal of Computing and Information, 1999. [ bib ]
